Feb 25 11:18:04 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 25 11:18:04 crc restorecon[4761]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 25 11:18:04 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 25 11:18:05 crc restorecon[4761]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 
11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 
11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 25 11:18:05 crc restorecon[4761]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 
crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 25 11:18:05 crc restorecon[4761]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc 
restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 25 11:18:05 crc restorecon[4761]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 25 11:18:06 crc kubenswrapper[5005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 25 11:18:06 crc kubenswrapper[5005]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 25 11:18:06 crc kubenswrapper[5005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 25 11:18:06 crc kubenswrapper[5005]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 25 11:18:06 crc kubenswrapper[5005]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 25 11:18:06 crc kubenswrapper[5005]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.429041 5005 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442114 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442162 5005 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442172 5005 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442182 5005 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442190 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442200 5005 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442208 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442219 5005 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442230 5005 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442240 5005 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442250 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442260 5005 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442270 5005 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442281 5005 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442290 5005 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442298 5005 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442306 5005 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442313 5005 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442321 5005 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442328 5005 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442337 5005 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442345 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442353 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442360 5005 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442394 5005 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442402 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442416 5005 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442425 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442434 5005 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442442 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442450 5005 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442458 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442465 5005 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442473 5005 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442481 5005 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442488 5005 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442499 5005 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442510 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442519 5005 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442530 5005 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442538 5005 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442547 5005 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442556 5005 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442565 5005 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442573 5005 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442581 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442589 5005 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442597 5005 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442606 5005 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442613 5005 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442635 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442643 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442651 5005 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442658 5005 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442666 5005 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442674 5005 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442681 5005 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442689 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442697 5005 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442705 5005 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442713 5005 feature_gate.go:330] unrecognized feature gate: Example
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442720 5005 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442728 5005 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442736 5005 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442743 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442751 5005 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442758 5005 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442766 5005 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442773 5005 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442781 5005 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.442788 5005 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443078 5005 flags.go:64] FLAG: --address="0.0.0.0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443100 5005 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443116 5005 flags.go:64] FLAG: --anonymous-auth="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443128 5005 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443139 5005 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443148 5005 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443161 5005 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443172 5005 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443182 5005 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443191 5005 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443200 5005 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443212 5005 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443221 5005 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443230 5005 flags.go:64] FLAG: --cgroup-root=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443239 5005 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443248 5005 flags.go:64] FLAG: --client-ca-file=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443256 5005 flags.go:64] FLAG: --cloud-config=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443265 5005 flags.go:64] FLAG: --cloud-provider=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443274 5005 flags.go:64] FLAG: --cluster-dns="[]"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443287 5005 flags.go:64] FLAG: --cluster-domain=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443295 5005 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443305 5005 flags.go:64] FLAG: --config-dir=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443313 5005 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443323 5005 flags.go:64] FLAG: --container-log-max-files="5"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443335 5005 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443344 5005 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443352 5005 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443361 5005 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443405 5005 flags.go:64] FLAG: --contention-profiling="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443415 5005 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443424 5005 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443433 5005 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443442 5005 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443454 5005 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443463 5005 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443471 5005 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443480 5005 flags.go:64] FLAG: --enable-load-reader="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443509 5005 flags.go:64] FLAG: --enable-server="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443518 5005 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443530 5005 flags.go:64] FLAG: --event-burst="100"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443539 5005 flags.go:64] FLAG: --event-qps="50"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443566 5005 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443576 5005 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443584 5005 flags.go:64] FLAG: --eviction-hard=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443606 5005 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443615 5005 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443623 5005 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443634 5005 flags.go:64] FLAG: --eviction-soft=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443643 5005 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443652 5005 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443661 5005 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443670 5005 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443679 5005 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443687 5005 flags.go:64] FLAG: --fail-swap-on="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443696 5005 flags.go:64] FLAG: --feature-gates=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443707 5005 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443716 5005 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443726 5005 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443735 5005 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443744 5005 flags.go:64] FLAG: --healthz-port="10248"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443753 5005 flags.go:64] FLAG: --help="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443762 5005 flags.go:64] FLAG: --hostname-override=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443771 5005 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443780 5005 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443789 5005 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443799 5005 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443807 5005 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443816 5005 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443825 5005 flags.go:64] FLAG: --image-service-endpoint=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443833 5005 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443842 5005 flags.go:64] FLAG: --kube-api-burst="100"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443851 5005 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443860 5005 flags.go:64] FLAG: --kube-api-qps="50"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443869 5005 flags.go:64] FLAG: --kube-reserved=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443877 5005 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443886 5005 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443896 5005 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443904 5005 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443913 5005 flags.go:64] FLAG: --lock-file=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443922 5005 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443931 5005 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443940 5005 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443954 5005 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.443964 5005 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444008 5005 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444018 5005 flags.go:64] FLAG: --logging-format="text"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444027 5005 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444036 5005 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444046 5005 flags.go:64] FLAG: --manifest-url=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444054 5005 flags.go:64] FLAG: --manifest-url-header=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444067 5005 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444076 5005 flags.go:64] FLAG: --max-open-files="1000000"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444088 5005 flags.go:64] FLAG: --max-pods="110"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444097 5005 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444116 5005 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444125 5005 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444134 5005 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444143 5005 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444152 5005 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444161 5005 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444183 5005 flags.go:64] FLAG: --node-status-max-images="50"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444192 5005 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444201 5005 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444210 5005 flags.go:64] FLAG: --pod-cidr=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444218 5005 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444231 5005 flags.go:64] FLAG: --pod-manifest-path=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444240 5005 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444249 5005 flags.go:64] FLAG: --pods-per-core="0"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444258 5005 flags.go:64] FLAG: --port="10250"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444267 5005 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444275 5005 flags.go:64] FLAG: --provider-id=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444284 5005 flags.go:64] FLAG: --qos-reserved=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444293 5005 flags.go:64] FLAG: --read-only-port="10255"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444302 5005 flags.go:64] FLAG: --register-node="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444310 5005 flags.go:64] FLAG: --register-schedulable="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444319 5005 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444336 5005 flags.go:64] FLAG: --registry-burst="10"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444344 5005 flags.go:64] FLAG: --registry-qps="5"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444353 5005 flags.go:64] FLAG: --reserved-cpus=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444362 5005 flags.go:64] FLAG: --reserved-memory=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444400 5005 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444409 5005 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444419 5005 flags.go:64] FLAG: --rotate-certificates="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444428 5005 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444436 5005 flags.go:64] FLAG: --runonce="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444445 5005 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444455 5005 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444467 5005 flags.go:64] FLAG: --seccomp-default="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444476 5005 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444485 5005 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444495 5005 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444505 5005 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444515 5005 flags.go:64] FLAG: --storage-driver-password="root"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444523 5005 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444532 5005 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444541 5005 flags.go:64] FLAG: --storage-driver-user="root"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444550 5005 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444559 5005 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444568 5005 flags.go:64] FLAG: --system-cgroups=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444577 5005 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444591 5005 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444600 5005 flags.go:64] FLAG: --tls-cert-file=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444608 5005 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444620 5005 flags.go:64] FLAG: --tls-min-version=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444629 5005 flags.go:64] FLAG: --tls-private-key-file=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444638 5005 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444647 5005 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444655 5005 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444665 5005 flags.go:64] FLAG: --v="2"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444678 5005 flags.go:64] FLAG: --version="false"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444690 5005 flags.go:64] FLAG: --vmodule=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444702 5005 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.444712 5005 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444936 5005 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444948 5005 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444959 5005 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444968 5005 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444978 5005 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444986 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.444997 5005 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445005 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445013 5005 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445021 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445030 5005 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445038 5005 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445045 5005 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445053 5005 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445061 5005 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445069 5005 feature_gate.go:330] unrecognized feature gate: Example
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445077 5005 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445085 5005 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445093 5005 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445101 5005 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445110 5005 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445118 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445128 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445136 5005 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445147 5005 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445157 5005 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445167 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445175 5005 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445184 5005 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445192 5005 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445200 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445209 5005 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445217 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445226 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445234 5005 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445245 5005 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445258 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445267 5005 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445289 5005 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445297 5005 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445305 5005 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445313 5005 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445321 5005 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445330 5005 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445338 5005 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445346 5005 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445354 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445362 5005 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445395 5005 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445403 5005 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445412 5005 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445420 5005 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445429 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445437 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445444 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445452 5005 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445461 5005 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445470 5005 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445478 5005 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445486 5005 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445494 5005 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445502 5005 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445511 5005 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445519 5005 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445527 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445535 5005 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445543 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445551 5005 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445561 5005 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445570 5005 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.445581 5005 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.445594 5005 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.457174 5005 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.457569 5005 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457719 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457738 5005 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457748 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457757 5005 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457768 5005 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457778 5005 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457788 5005 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457796 5005 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457806 5005 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457814 5005 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457821 5005 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457830 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457837 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457845 5005 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457853 5005 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457861 5005 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 25 11:18:06 crc 
kubenswrapper[5005]: W0225 11:18:06.457869 5005 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457877 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457888 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457896 5005 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457904 5005 feature_gate.go:330] unrecognized feature gate: Example Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457911 5005 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457919 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457927 5005 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457935 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457943 5005 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457950 5005 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457958 5005 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457965 5005 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457973 5005 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457981 5005 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.457992 5005 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458001 5005 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458011 5005 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458032 5005 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458040 5005 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458048 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458056 5005 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458065 5005 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458073 5005 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458080 5005 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458088 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458096 5005 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458103 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458114 5005 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458124 5005 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458132 5005 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458141 5005 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458149 5005 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458157 5005 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458165 5005 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458173 5005 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458181 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458189 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458198 5005 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458207 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458216 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458225 5005 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458234 5005 feature_gate.go:330] unrecognized 
feature gate: InsightsOnDemandDataGather Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458247 5005 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458256 5005 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458264 5005 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458273 5005 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458280 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458288 5005 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458296 5005 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458304 5005 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458311 5005 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458319 5005 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458327 5005 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458337 5005 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.458349 5005 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458614 5005 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458629 5005 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458639 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458648 5005 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458657 5005 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458665 5005 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458674 5005 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458681 5005 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458690 5005 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458697 5005 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458705 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458713 5005 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458722 5005 feature_gate.go:330] unrecognized 
feature gate: MetricsCollectionProfiles Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458730 5005 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458738 5005 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458746 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458754 5005 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458762 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458769 5005 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458777 5005 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458785 5005 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458793 5005 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458803 5005 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458814 5005 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458823 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458832 5005 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458841 5005 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458850 5005 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458858 5005 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458866 5005 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458875 5005 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458884 5005 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458892 5005 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458900 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458910 5005 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458918 5005 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458925 5005 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458933 5005 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458941 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458949 5005 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458958 5005 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458967 5005 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458977 5005 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458987 5005 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.458995 5005 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459003 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459011 5005 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459021 5005 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459030 5005 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459038 5005 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459046 5005 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459054 5005 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459062 5005 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459070 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459077 5005 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459086 5005 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459093 5005 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459101 5005 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459111 5005 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459121 5005 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459130 5005 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459138 5005 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459146 5005 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459154 5005 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459162 5005 feature_gate.go:330] unrecognized feature gate: Example Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459170 5005 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459178 5005 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459185 5005 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459193 5005 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459200 5005 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.459212 5005 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.459223 5005 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.459518 5005 server.go:940] "Client rotation is on, will bootstrap in background" Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.464730 5005 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.469798 5005 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.470006 5005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.472138 5005 server.go:997] "Starting client certificate rotation" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.472191 5005 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.472456 5005 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.495508 5005 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.498547 5005 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.498734 5005 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.519403 5005 log.go:25] "Validated CRI v1 runtime API" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.554904 5005 log.go:25] "Validated CRI v1 image API" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.557270 5005 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.563423 5005 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-25-11-13-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.563457 5005 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.589260 5005 manager.go:217] Machine: {Timestamp:2026-02-25 11:18:06.586313397 +0000 UTC m=+0.627045744 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:25838fef-f2f6-482f-b878-b96864dc5280 BootID:37b239f9-8862-4454-946c-237d19e88927 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:58:4f:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:58:4f:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:af:db:7c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d2:43:6b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e0:67:72 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:50:36:fd Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:8a:de:7d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:d9:bd:80:9e:03 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:fb:b8:7a:bc:f9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] 
Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.589527 5005 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.589770 5005 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.590108 5005 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.590286 5005 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.590326 5005 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.591888 5005 topology_manager.go:138] "Creating topology manager with none policy"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.591912 5005 container_manager_linux.go:303] "Creating device plugin manager"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.592512 5005 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.592541 5005 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.592798 5005 state_mem.go:36] "Initialized new in-memory state store"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.592908 5005 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.596884 5005 kubelet.go:418] "Attempting to sync node with API server"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.596914 5005 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.596954 5005 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.596969 5005 kubelet.go:324] "Adding apiserver pod source"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.597178 5005 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.603327 5005 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.604486 5005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.605035 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.605148 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.605167 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.605259 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.607451 5005 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609412 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609457 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609473 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609487 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609509 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609522 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609535 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609581 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609596 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609610 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609648 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.609663 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.610145 5005 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.610782 5005 server.go:1280] "Started kubelet"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.611002 5005 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.611645 5005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.612685 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.612782 5005 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 25 11:18:06 crc systemd[1]: Started Kubernetes Kubelet.
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.616465 5005 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.616638 5005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.616830 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.616942 5005 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.616955 5005 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.622579 5005 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.623307 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.623565 5005 factory.go:55] Registering systemd factory
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.623598 5005 factory.go:221] Registration of the systemd container factory successfully
Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.623779 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.623863 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.623965 5005 factory.go:153] Registering CRI-O factory
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.623977 5005 factory.go:221] Registration of the crio container factory successfully
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.624045 5005 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.624110 5005 factory.go:103] Registering Raw factory
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.624129 5005 manager.go:1196] Started watching for new ooms in manager
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.624871 5005 manager.go:319] Starting recovery of all containers
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.627007 5005 server.go:460] "Adding debug handlers to kubelet server"
Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.627600 5005 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897793e3cedeb78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.61073804 +0000 UTC m=+0.651470407,LastTimestamp:2026-02-25 11:18:06.61073804 +0000 UTC m=+0.651470407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637632 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637720 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637730 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637739 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637749 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637759 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637767 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.637776 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639550 5005 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639581 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639594 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639609 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639621 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639630 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639644 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639654 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639668 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639678 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639687 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639696 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639706 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639717 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639728 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639737 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639748 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639757 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639768 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639788 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639800 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639810 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639820 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639834 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639848 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639860 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639872 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639884 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639896 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639906 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639963 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639975 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639985 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.639995 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640004 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640014 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640025 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640035 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640045 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640054 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640064 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640109 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640120 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640131 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640142 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640163 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640179 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640191 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640203 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640213 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640224 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640244 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640254 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640264 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640277 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640290 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640302 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640313 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640324 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640334 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640345 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640356 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640403 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640417 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640430 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640440 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640452 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640467 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640480 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640490 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities"
seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640499 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640513 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640522 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640531 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640542 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640553 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640564 
5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640577 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640622 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640634 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640645 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640661 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640672 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640686 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640696 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640706 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640715 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640725 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640737 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640747 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640758 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640769 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640779 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640789 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640801 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640810 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640820 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640848 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640859 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640870 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640880 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 
11:18:06.640890 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640901 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640911 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640921 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640931 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640940 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640951 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640960 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640968 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640977 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640986 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.640994 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641003 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641013 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641025 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641034 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641043 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641052 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641061 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641069 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641078 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641087 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641095 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641103 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641112 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641122 5005 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641130 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641139 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641148 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641159 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641168 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641177 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641186 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641195 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641203 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641212 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641223 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641232 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641241 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641250 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641259 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641268 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641277 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641286 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641297 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641308 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641321 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641329 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641360 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641381 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641390 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641399 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641408 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641417 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641428 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641436 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" 
seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641445 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641452 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641461 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641469 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641477 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641486 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 
11:18:06.641494 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641519 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641530 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641539 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641547 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641556 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641564 5005 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641575 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641583 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641592 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641600 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641609 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641689 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641699 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641730 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641740 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641753 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641762 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641771 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641782 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641792 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641801 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641810 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641819 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641827 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641836 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641844 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641853 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641861 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641871 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641882 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641891 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641900 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641909 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641918 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641927 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641935 5005 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641943 5005 reconstruct.go:97] "Volume reconstruction finished" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.641951 5005 reconciler.go:26] "Reconciler: start to sync state" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.661096 5005 manager.go:324] Recovery completed Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.680759 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.681558 5005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.683038 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.683069 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.683080 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.684134 5005 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.684193 5005 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.684209 5005 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.684230 5005 kubelet.go:2335] "Starting kubelet main sync loop" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.684233 5005 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.684301 5005 state_mem.go:36] "Initialized new in-memory state store" Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.684295 5005 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 25 11:18:06 crc kubenswrapper[5005]: W0225 11:18:06.686436 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.686505 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.700097 5005 policy_none.go:49] "None policy: Start" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.701111 5005 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.701196 5005 state_mem.go:35] "Initializing new in-memory state store" Feb 25 11:18:06 crc 
kubenswrapper[5005]: E0225 11:18:06.717429 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.769161 5005 manager.go:334] "Starting Device Plugin manager" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.769288 5005 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.769312 5005 server.go:79] "Starting device plugin registration server" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.770078 5005 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.770128 5005 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.770355 5005 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.770507 5005 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.770524 5005 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.778947 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.785233 5005 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.785346 5005 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.786683 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.786726 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.786741 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.787022 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.787315 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.787387 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788306 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788408 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788430 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788450 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788411 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc 
kubenswrapper[5005]: I0225 11:18:06.788488 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788714 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788852 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.788894 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790321 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790359 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790475 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790395 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790517 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790532 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790749 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.790929 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.791027 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.791898 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.791935 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.791952 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.792053 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.792084 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.792100 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.792138 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.792358 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.792439 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793003 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793035 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793048 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793304 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793347 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793466 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793517 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.793537 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.796577 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.796615 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc 
kubenswrapper[5005]: I0225 11:18:06.796628 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.824116 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844047 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844110 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844151 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844273 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844394 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844440 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844487 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844513 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844580 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 
11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844619 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844642 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844663 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844757 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844806 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.844880 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.870344 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.872124 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.872189 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.872247 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.872290 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:06 crc kubenswrapper[5005]: E0225 11:18:06.873156 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946108 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946195 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946245 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946284 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946322 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946357 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946414 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946419 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946554 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946554 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946583 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946630 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946645 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946604 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946446 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946753 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946720 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946823 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946829 
5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946869 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946892 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946921 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946926 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946936 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946966 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946977 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946983 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.946941 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc kubenswrapper[5005]: I0225 11:18:06.947040 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:06 crc 
kubenswrapper[5005]: I0225 11:18:06.947470 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.073556 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.074810 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.074845 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.074856 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.074886 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:07 crc kubenswrapper[5005]: E0225 11:18:07.075340 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.114019 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.120916 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.138437 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.163514 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.165600 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f504156442cf9865f239827ccecacf3272e0f29517c76f4c361b6a5f7df1bac8 WatchSource:0}: Error finding container f504156442cf9865f239827ccecacf3272e0f29517c76f4c361b6a5f7df1bac8: Status 404 returned error can't find the container with id f504156442cf9865f239827ccecacf3272e0f29517c76f4c361b6a5f7df1bac8 Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.166724 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a264e5eb5aa4904d536aaed25de2279e7bf8e860b4f52034c40bf9f823b1a483 WatchSource:0}: Error finding container a264e5eb5aa4904d536aaed25de2279e7bf8e860b4f52034c40bf9f823b1a483: Status 404 returned error can't find the container with id a264e5eb5aa4904d536aaed25de2279e7bf8e860b4f52034c40bf9f823b1a483 Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.173099 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.176280 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1d4cc3b2c4855ec772aa8100e34aee8bcb0b4831da6b729f01237f7be03535a1 WatchSource:0}: Error finding container 1d4cc3b2c4855ec772aa8100e34aee8bcb0b4831da6b729f01237f7be03535a1: Status 404 returned error can't find the container with id 1d4cc3b2c4855ec772aa8100e34aee8bcb0b4831da6b729f01237f7be03535a1 Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.185470 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b6ca2499fb5e68037fe713ec0ba07e41b578c633555754ca5c3d2d1960f0254c WatchSource:0}: Error finding container b6ca2499fb5e68037fe713ec0ba07e41b578c633555754ca5c3d2d1960f0254c: Status 404 returned error can't find the container with id b6ca2499fb5e68037fe713ec0ba07e41b578c633555754ca5c3d2d1960f0254c Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.194360 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3a18f295ac52be23fb41ab99b1f4e1e256f0e1e511f4b1a400e81716ef240f53 WatchSource:0}: Error finding container 3a18f295ac52be23fb41ab99b1f4e1e256f0e1e511f4b1a400e81716ef240f53: Status 404 returned error can't find the container with id 3a18f295ac52be23fb41ab99b1f4e1e256f0e1e511f4b1a400e81716ef240f53 Feb 25 11:18:07 crc kubenswrapper[5005]: E0225 11:18:07.225775 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Feb 25 
11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.475792 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.477604 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.477661 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.477675 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.477711 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:07 crc kubenswrapper[5005]: E0225 11:18:07.478162 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.614395 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.688215 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a264e5eb5aa4904d536aaed25de2279e7bf8e860b4f52034c40bf9f823b1a483"} Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.689332 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a18f295ac52be23fb41ab99b1f4e1e256f0e1e511f4b1a400e81716ef240f53"} Feb 25 
11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.690188 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b6ca2499fb5e68037fe713ec0ba07e41b578c633555754ca5c3d2d1960f0254c"} Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.691009 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1d4cc3b2c4855ec772aa8100e34aee8bcb0b4831da6b729f01237f7be03535a1"} Feb 25 11:18:07 crc kubenswrapper[5005]: I0225 11:18:07.691687 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f504156442cf9865f239827ccecacf3272e0f29517c76f4c361b6a5f7df1bac8"} Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.796412 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:07 crc kubenswrapper[5005]: E0225 11:18:07.796556 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:07 crc kubenswrapper[5005]: W0225 11:18:07.879615 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: 
connect: connection refused Feb 25 11:18:07 crc kubenswrapper[5005]: E0225 11:18:07.879709 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:08 crc kubenswrapper[5005]: E0225 11:18:08.026800 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s" Feb 25 11:18:08 crc kubenswrapper[5005]: W0225 11:18:08.060974 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:08 crc kubenswrapper[5005]: E0225 11:18:08.061097 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:08 crc kubenswrapper[5005]: W0225 11:18:08.134557 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:08 crc kubenswrapper[5005]: E0225 11:18:08.134690 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.279109 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.281537 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.281588 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.281601 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.281638 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:08 crc kubenswrapper[5005]: E0225 11:18:08.282195 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.614411 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.649010 5005 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 25 11:18:08 crc kubenswrapper[5005]: E0225 11:18:08.650517 5005 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.696906 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8" exitCode=0 Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.697050 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.697254 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.699054 5005 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fe81953c38a51b126c7a2d2138e2640da0fc3f9648fa4eaa92d06307d4847301" exitCode=0 Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.699116 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fe81953c38a51b126c7a2d2138e2640da0fc3f9648fa4eaa92d06307d4847301"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.699304 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.699345 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.699356 5005 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.699521 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.700923 5005 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5" exitCode=0 Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.700989 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.701083 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.701626 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.702841 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.702897 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.702917 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.702932 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.702954 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 
11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.702965 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.703596 5005 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7909b9056bec0aabc866cbb67e8217ee44cc9136a5077dc7db73bb2ee94c25ed" exitCode=0 Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.703724 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.703772 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7909b9056bec0aabc866cbb67e8217ee44cc9136a5077dc7db73bb2ee94c25ed"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.703998 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.704037 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.704049 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.704961 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.705006 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.705023 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.711991 
5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22eb10b8f2613a94536cca2ae11c54c3a8491dc87bf3e864e9842dcb74b75baa"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.712042 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"42476dee3ffaa9ac66ce2a29024ca2663cd8f81abd08ae608e741ed3645b3d70"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.712057 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aefec33351776ece26b3ae45cbd0ee44941bf6c6451f745d8657623198de9c14"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.712072 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc7b6fc03c411ec651f8db31f2510f6b0fa45f7397e8466048befea7c261ae8e"} Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.712115 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.713477 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.713531 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:08 crc kubenswrapper[5005]: I0225 11:18:08.713550 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:09 crc kubenswrapper[5005]: 
W0225 11:18:09.511709 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:09 crc kubenswrapper[5005]: E0225 11:18:09.512336 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.614084 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 25 11:18:09 crc kubenswrapper[5005]: E0225 11:18:09.627624 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.718073 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5733bda466d8100c86e9ae87d3522442369ecdf4bd2faa4f4231b88808e8444d"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.718128 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"811b14d49ac3bf58327569d54a22902568522dd1a93fdd9f500c2cd681501125"} Feb 25 11:18:09 crc 
kubenswrapper[5005]: I0225 11:18:09.718138 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.718142 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d63dc70b5fb46561c9cdcecff7afb706a569f6e916cb70eb5beacb426a094388"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.719112 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.719152 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.719165 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.721571 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.721604 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.721617 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.721629 
5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.723831 5005 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="91a4f733643e1a4246fb2c0a151e06be2c624ab338a3087f1617062580aaf03c" exitCode=0 Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.723870 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"91a4f733643e1a4246fb2c0a151e06be2c624ab338a3087f1617062580aaf03c"} Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.723971 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.724724 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.724745 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.724755 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.725820 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.725872 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d"} Feb 25 11:18:09 crc 
kubenswrapper[5005]: I0225 11:18:09.725921 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.726490 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.726523 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.726537 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.727072 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.727293 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.727307 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.883066 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.884280 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.884338 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.884353 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.884421 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 
25 11:18:09 crc kubenswrapper[5005]: E0225 11:18:09.884953 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 25 11:18:09 crc kubenswrapper[5005]: I0225 11:18:09.930039 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.734739 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a2f329c4e2573c6fb985a94776f7147f59434cc73b03b28daf84ea23b84fb7fb"} Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.734892 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.736327 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.736367 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.736419 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738482 5005 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f523d445c10bca3ea5d7002a1d9c3a9b504a09ffc389081f0e96e5463ac560f2" exitCode=0 Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738580 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738610 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738661 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738610 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738601 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f523d445c10bca3ea5d7002a1d9c3a9b504a09ffc389081f0e96e5463ac560f2"} Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.738615 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.739979 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740001 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740013 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740035 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740072 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740092 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740363 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740423 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740439 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740970 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.740998 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.741008 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:10 crc kubenswrapper[5005]: I0225 11:18:10.995666 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748000 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abbc6361c35e7ece78bcb3feba696a65ea5d4cdb6d089e11a42b06baa3b491c9"} Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748054 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748089 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"11b99b7299ae569668e3a7fa5af2a82326286e982e2aab988b6a77e3555e9d34"} Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748140 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"805d86c45023ab12ec4fbd1ac158922a87d5760f5b1bb88278ae08d1dffb84fa"} Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748154 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748166 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d28f40b63f2206b3b1031f9cf79925127971649da26ce3c09052dd194270b90"} Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.748103 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.749779 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.749831 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.749852 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.750064 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.750107 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:11 crc kubenswrapper[5005]: I0225 11:18:11.750118 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.372103 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 
11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.758733 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7fe10effdabc01470ca7c131fa908a28477c229ac9b900c8ef96e48299ae0474"} Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.758841 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.758935 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.758849 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.760536 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.760609 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.760632 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.760665 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.760703 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.760721 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.930971 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:18:12 crc kubenswrapper[5005]: I0225 11:18:12.931113 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.022316 5005 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.085127 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.086312 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.086345 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.086354 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.086394 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.272512 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.761032 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:13 crc 
kubenswrapper[5005]: I0225 11:18:13.761126 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.762155 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.762186 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.762196 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.762332 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.762420 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:13 crc kubenswrapper[5005]: I0225 11:18:13.762436 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:14 crc kubenswrapper[5005]: I0225 11:18:14.886842 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 25 11:18:14 crc kubenswrapper[5005]: I0225 11:18:14.887320 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:14 crc kubenswrapper[5005]: I0225 11:18:14.889038 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:14 crc kubenswrapper[5005]: I0225 11:18:14.889109 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:14 crc kubenswrapper[5005]: I0225 11:18:14.889127 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 25 11:18:15 crc kubenswrapper[5005]: I0225 11:18:15.711494 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:15 crc kubenswrapper[5005]: I0225 11:18:15.711863 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:15 crc kubenswrapper[5005]: I0225 11:18:15.714355 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:15 crc kubenswrapper[5005]: I0225 11:18:15.714470 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:15 crc kubenswrapper[5005]: I0225 11:18:15.714490 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:16 crc kubenswrapper[5005]: E0225 11:18:16.779078 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.096977 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.097226 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.098974 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.099021 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.099038 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:17 crc 
kubenswrapper[5005]: I0225 11:18:17.669017 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.669310 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.672044 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.672136 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.672160 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.679529 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.774419 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.776485 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.776561 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.776581 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:17 crc kubenswrapper[5005]: I0225 11:18:17.781720 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:18 crc kubenswrapper[5005]: 
I0225 11:18:18.197400 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:18 crc kubenswrapper[5005]: I0225 11:18:18.777507 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:18 crc kubenswrapper[5005]: I0225 11:18:18.778895 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:18 crc kubenswrapper[5005]: I0225 11:18:18.778952 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:18 crc kubenswrapper[5005]: I0225 11:18:18.778970 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:19 crc kubenswrapper[5005]: I0225 11:18:19.779943 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:19 crc kubenswrapper[5005]: I0225 11:18:19.781230 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:19 crc kubenswrapper[5005]: I0225 11:18:19.781274 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:19 crc kubenswrapper[5005]: I0225 11:18:19.781286 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:20 crc kubenswrapper[5005]: I0225 11:18:20.615464 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 25 11:18:20 crc kubenswrapper[5005]: W0225 11:18:20.691740 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 25 11:18:20 crc kubenswrapper[5005]: I0225 11:18:20.691894 5005 trace.go:236] Trace[1400799799]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Feb-2026 11:18:10.690) (total time: 10001ms): Feb 25 11:18:20 crc kubenswrapper[5005]: Trace[1400799799]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:18:20.691) Feb 25 11:18:20 crc kubenswrapper[5005]: Trace[1400799799]: [10.001582281s] [10.001582281s] END Feb 25 11:18:20 crc kubenswrapper[5005]: E0225 11:18:20.691936 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 25 11:18:21 crc kubenswrapper[5005]: W0225 11:18:21.136313 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.136493 5005 trace.go:236] Trace[903499579]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Feb-2026 11:18:11.134) (total time: 10002ms): Feb 25 11:18:21 crc kubenswrapper[5005]: Trace[903499579]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:18:21.136) Feb 25 11:18:21 crc kubenswrapper[5005]: Trace[903499579]: [10.002140527s] [10.002140527s] END Feb 25 11:18:21 crc 
kubenswrapper[5005]: E0225 11:18:21.136555 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 25 11:18:21 crc kubenswrapper[5005]: W0225 11:18:21.237506 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.237640 5005 trace.go:236] Trace[459752792]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Feb-2026 11:18:11.235) (total time: 10001ms): Feb 25 11:18:21 crc kubenswrapper[5005]: Trace[459752792]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:18:21.237) Feb 25 11:18:21 crc kubenswrapper[5005]: Trace[459752792]: [10.00179978s] [10.00179978s] END Feb 25 11:18:21 crc kubenswrapper[5005]: E0225 11:18:21.237678 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.698828 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.699070 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 
11:18:21.700631 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.700691 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.700713 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.747035 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.786131 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.787213 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.787264 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.787335 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.814560 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 25 11:18:21 crc kubenswrapper[5005]: E0225 11:18:21.951273 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z" node="crc" Feb 25 11:18:21 crc kubenswrapper[5005]: E0225 11:18:21.952955 5005 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 25 11:18:21 crc kubenswrapper[5005]: W0225 11:18:21.953678 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z Feb 25 11:18:21 crc kubenswrapper[5005]: E0225 11:18:21.953751 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 25 11:18:21 crc kubenswrapper[5005]: E0225 11:18:21.956002 5005 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897793e3cedeb78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.61073804 +0000 UTC m=+0.651470407,LastTimestamp:2026-02-25 11:18:06.61073804 +0000 UTC m=+0.651470407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.958206 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z Feb 25 11:18:21 crc kubenswrapper[5005]: E0225 11:18:21.959670 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:21Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.959910 5005 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.959978 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.967350 
5005 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 25 11:18:21 crc kubenswrapper[5005]: I0225 11:18:21.967439 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.620149 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:22Z is after 2026-02-23T05:33:13Z Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.791640 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.794827 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a2f329c4e2573c6fb985a94776f7147f59434cc73b03b28daf84ea23b84fb7fb" exitCode=255 Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.794945 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a2f329c4e2573c6fb985a94776f7147f59434cc73b03b28daf84ea23b84fb7fb"} Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.795746 5005 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.796267 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.800018 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.800062 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.800077 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.800475 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.800556 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.800583 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.801660 5005 scope.go:117] "RemoveContainer" containerID="a2f329c4e2573c6fb985a94776f7147f59434cc73b03b28daf84ea23b84fb7fb" Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.931587 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:18:22 crc kubenswrapper[5005]: I0225 11:18:22.931700 5005 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.080756 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.618321 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:23Z is after 2026-02-23T05:33:13Z Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.801315 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.804431 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3"} Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.804618 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.812848 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.812900 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 25 11:18:23 crc kubenswrapper[5005]: I0225 11:18:23.812913 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.618219 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:24Z is after 2026-02-23T05:33:13Z Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.808493 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.809086 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.811854 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" exitCode=255 Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.811902 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3"} Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.811954 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.811957 5005 scope.go:117] "RemoveContainer" containerID="a2f329c4e2573c6fb985a94776f7147f59434cc73b03b28daf84ea23b84fb7fb" Feb 25 
11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.813157 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.813319 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.813341 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:24 crc kubenswrapper[5005]: I0225 11:18:24.814230 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:24 crc kubenswrapper[5005]: E0225 11:18:24.814719 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.618507 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:25Z is after 2026-02-23T05:33:13Z Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.719512 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.817231 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 25 11:18:25 
crc kubenswrapper[5005]: I0225 11:18:25.820168 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.821698 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.821774 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.821786 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.825118 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:25 crc kubenswrapper[5005]: E0225 11:18:25.825401 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:25 crc kubenswrapper[5005]: I0225 11:18:25.830363 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:25 crc kubenswrapper[5005]: W0225 11:18:25.932204 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:25Z is after 2026-02-23T05:33:13Z Feb 25 11:18:25 crc kubenswrapper[5005]: E0225 11:18:25.932320 5005 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 25 11:18:26 crc kubenswrapper[5005]: W0225 11:18:26.154063 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:26Z is after 2026-02-23T05:33:13Z Feb 25 11:18:26 crc kubenswrapper[5005]: E0225 11:18:26.154174 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 25 11:18:26 crc kubenswrapper[5005]: I0225 11:18:26.617324 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:26Z is after 2026-02-23T05:33:13Z Feb 25 11:18:26 crc kubenswrapper[5005]: E0225 11:18:26.779329 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:18:26 crc kubenswrapper[5005]: 
I0225 11:18:26.822490 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:26 crc kubenswrapper[5005]: I0225 11:18:26.823670 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:26 crc kubenswrapper[5005]: I0225 11:18:26.823754 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:26 crc kubenswrapper[5005]: I0225 11:18:26.823776 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:26 crc kubenswrapper[5005]: I0225 11:18:26.824945 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:26 crc kubenswrapper[5005]: E0225 11:18:26.825284 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:26 crc kubenswrapper[5005]: W0225 11:18:26.851789 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:26Z is after 2026-02-23T05:33:13Z Feb 25 11:18:26 crc kubenswrapper[5005]: E0225 11:18:26.851909 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 25 11:18:27 crc kubenswrapper[5005]: I0225 11:18:27.618295 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-25T11:18:27Z is after 2026-02-23T05:33:13Z Feb 25 11:18:27 crc kubenswrapper[5005]: I0225 11:18:27.825447 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:27 crc kubenswrapper[5005]: I0225 11:18:27.826770 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:27 crc kubenswrapper[5005]: I0225 11:18:27.826836 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:27 crc kubenswrapper[5005]: I0225 11:18:27.826848 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:27 crc kubenswrapper[5005]: I0225 11:18:27.827435 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:27 crc kubenswrapper[5005]: E0225 11:18:27.827609 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
Feb 25 11:18:28 crc kubenswrapper[5005]: I0225 11:18:28.351421 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:28 crc kubenswrapper[5005]: I0225 11:18:28.353055 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:28 crc kubenswrapper[5005]: I0225 11:18:28.353098 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:28 crc kubenswrapper[5005]: I0225 11:18:28.353110 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:28 crc kubenswrapper[5005]: I0225 11:18:28.353140 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:28 crc kubenswrapper[5005]: E0225 11:18:28.360485 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:18:28 crc kubenswrapper[5005]: E0225 11:18:28.364898 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:18:28 crc kubenswrapper[5005]: I0225 11:18:28.618909 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:29 crc kubenswrapper[5005]: I0225 11:18:29.619306 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 25 11:18:30 crc kubenswrapper[5005]: I0225 11:18:30.188999 5005 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 25 11:18:30 crc kubenswrapper[5005]: I0225 11:18:30.205828 5005 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 25 11:18:30 crc kubenswrapper[5005]: I0225 11:18:30.620811 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:31 crc kubenswrapper[5005]: I0225 11:18:31.621027 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.963038 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e3cedeb78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.61073804 +0000 UTC m=+0.651470407,LastTimestamp:2026-02-25 11:18:06.61073804 +0000 UTC m=+0.651470407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.967599 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.972804 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.979009 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.985185 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e469eed1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.773333279 +0000 UTC m=+0.814065656,LastTimestamp:2026-02-25 11:18:06.773333279 +0000 UTC m=+0.814065656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.991532 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.786709133 +0000 UTC m=+0.827441470,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:31 crc kubenswrapper[5005]: E0225 11:18:31.996225 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.786735433 +0000 UTC m=+0.827467770,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.002672 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413dd928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 
11:18:06.786748994 +0000 UTC m=+0.827481331,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.012151 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.788356257 +0000 UTC m=+0.829088624,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.018850 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.788422449 +0000 UTC m=+0.829154776,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.024952 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.788445549 +0000 UTC m=+0.829177876,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.029184 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413dd928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 11:18:06.78845665 +0000 UTC m=+0.829188977,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.034236 5005 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.78847983 +0000 UTC m=+0.829212207,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.038213 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413dd928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 11:18:06.788500011 +0000 UTC m=+0.829232378,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.044942 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" 
in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.790347582 +0000 UTC m=+0.831079949,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.050603 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.790466165 +0000 UTC m=+0.831198532,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.054389 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413dd928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 11:18:06.790489696 +0000 UTC m=+0.831222063,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.056459 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.790503296 +0000 UTC m=+0.831235633,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.058966 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.790527067 +0000 UTC m=+0.831259404,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.060576 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413dd928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 11:18:06.790538937 +0000 UTC m=+0.831271274,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.069984 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC 
m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.791921674 +0000 UTC m=+0.832654011,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.077765 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.791946075 +0000 UTC m=+0.832678412,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.083187 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413dd928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413dd928 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683085096 +0000 UTC m=+0.723817423,LastTimestamp:2026-02-25 11:18:06.791961355 +0000 UTC m=+0.832693702,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.087603 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413d8180\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413d8180 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683062656 +0000 UTC m=+0.723794983,LastTimestamp:2026-02-25 11:18:06.792073878 +0000 UTC m=+0.832806215,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.093685 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897793e413db416\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897793e413db416 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:06.683075606 +0000 UTC m=+0.723807923,LastTimestamp:2026-02-25 11:18:06.792094389 +0000 UTC m=+0.832826736,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.099188 5005 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e5e880a67 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.174486631 +0000 UTC m=+1.215218968,LastTimestamp:2026-02-25 11:18:07.174486631 +0000 UTC m=+1.215218968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.104629 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793e5e880c01 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.174487041 +0000 UTC m=+1.215219408,LastTimestamp:2026-02-25 11:18:07.174487041 +0000 UTC m=+1.215219408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.109004 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793e5f105820 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.183419424 +0000 UTC m=+1.224151761,LastTimestamp:2026-02-25 11:18:07.183419424 +0000 UTC m=+1.224151761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.113223 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897793e5fad6fd4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.193714644 +0000 UTC m=+1.234447011,LastTimestamp:2026-02-25 11:18:07.193714644 +0000 UTC m=+1.234447011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.119015 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793e5ffc0c34 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.198866484 +0000 UTC m=+1.239598851,LastTimestamp:2026-02-25 11:18:07.198866484 +0000 UTC m=+1.239598851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.122882 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793e7fec96e7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.734724327 +0000 UTC m=+1.775456664,LastTimestamp:2026-02-25 11:18:07.734724327 +0000 UTC m=+1.775456664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.127979 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e8077c431 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.743845425 +0000 UTC m=+1.784577752,LastTimestamp:2026-02-25 11:18:07.743845425 +0000 UTC m=+1.784577752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc 
kubenswrapper[5005]: E0225 11:18:32.131525 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897793e809b744c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.746184268 +0000 UTC m=+1.786916595,LastTimestamp:2026-02-25 11:18:07.746184268 +0000 UTC m=+1.786916595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.136934 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793e80d4d721 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.749945121 +0000 UTC m=+1.790677438,LastTimestamp:2026-02-25 11:18:07.749945121 +0000 UTC m=+1.790677438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc 
kubenswrapper[5005]: E0225 11:18:32.140310 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793e80d9ff5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.7502831 +0000 UTC m=+1.791015437,LastTimestamp:2026-02-25 11:18:07.7502831 +0000 UTC m=+1.791015437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.143569 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793e80dfc429 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.750661161 +0000 UTC m=+1.791393528,LastTimestamp:2026-02-25 11:18:07.750661161 +0000 UTC m=+1.791393528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.146756 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e8157d912 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.758530834 +0000 UTC m=+1.799263161,LastTimestamp:2026-02-25 11:18:07.758530834 +0000 UTC m=+1.799263161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.149794 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e81690854 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.759657044 +0000 UTC m=+1.800389371,LastTimestamp:2026-02-25 11:18:07.759657044 +0000 UTC m=+1.800389371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.153294 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793e818a0590 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.761819024 +0000 UTC m=+1.802551351,LastTimestamp:2026-02-25 11:18:07.761819024 +0000 UTC m=+1.802551351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.156571 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897793e81a15b01 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.763348225 +0000 UTC m=+1.804080552,LastTimestamp:2026-02-25 11:18:07.763348225 +0000 UTC m=+1.804080552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.159264 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793e81a2b813 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.763437587 +0000 UTC m=+1.804169934,LastTimestamp:2026-02-25 11:18:07.763437587 +0000 UTC m=+1.804169934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.164884 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e9378d4bc openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.0626823 +0000 UTC m=+2.103414627,LastTimestamp:2026-02-25 11:18:08.0626823 +0000 UTC m=+2.103414627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.175954 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e941a5fff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.073269247 +0000 UTC m=+2.114001574,LastTimestamp:2026-02-25 11:18:08.073269247 +0000 UTC m=+2.114001574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.180442 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e9428917f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.074199423 +0000 UTC m=+2.114931750,LastTimestamp:2026-02-25 11:18:08.074199423 +0000 UTC m=+2.114931750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.185206 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e9f2c5c03 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.258997251 +0000 UTC m=+2.299729578,LastTimestamp:2026-02-25 11:18:08.258997251 +0000 UTC 
m=+2.299729578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.195913 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e9ffd14a8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.272676008 +0000 UTC m=+2.313408335,LastTimestamp:2026-02-25 11:18:08.272676008 +0000 UTC m=+2.313408335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.200193 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793ea010c135 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.273965365 +0000 UTC m=+2.314697692,LastTimestamp:2026-02-25 11:18:08.273965365 +0000 UTC m=+2.314697692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.204755 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793ead05d535 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.491353397 +0000 UTC m=+2.532085764,LastTimestamp:2026-02-25 11:18:08.491353397 +0000 UTC m=+2.532085764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.210058 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793eae2d73cd 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.510727117 +0000 UTC m=+2.551459484,LastTimestamp:2026-02-25 11:18:08.510727117 +0000 UTC m=+2.551459484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.214007 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793eb9865724 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.70110186 +0000 UTC m=+2.741834217,LastTimestamp:2026-02-25 11:18:08.70110186 +0000 UTC m=+2.741834217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.218650 5005 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793eb9c88029 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.705437737 +0000 UTC m=+2.746170104,LastTimestamp:2026-02-25 11:18:08.705437737 +0000 UTC m=+2.746170104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.223147 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897793eb9d87a49 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.706484809 +0000 UTC 
m=+2.747217166,LastTimestamp:2026-02-25 11:18:08.706484809 +0000 UTC m=+2.747217166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.226431 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793eb9f95eab openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.708640427 +0000 UTC m=+2.749372794,LastTimestamp:2026-02-25 11:18:08.708640427 +0000 UTC m=+2.749372794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.230557 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793eca8fac43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.986926147 +0000 UTC m=+3.027658474,LastTimestamp:2026-02-25 11:18:08.986926147 +0000 UTC m=+3.027658474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.234832 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793eca9ecd84 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.9879177 +0000 UTC m=+3.028650047,LastTimestamp:2026-02-25 11:18:08.9879177 +0000 UTC m=+3.028650047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.238324 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897793ecaa02ef5 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.988008181 +0000 UTC m=+3.028740518,LastTimestamp:2026-02-25 11:18:08.988008181 +0000 UTC m=+3.028740518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.241760 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793ecaa1966b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.988100203 +0000 UTC m=+3.028832540,LastTimestamp:2026-02-25 11:18:08.988100203 +0000 UTC m=+3.028832540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.246858 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ecb5a291a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.000196378 +0000 UTC m=+3.040928725,LastTimestamp:2026-02-25 11:18:09.000196378 +0000 UTC m=+3.040928725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.252598 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ecb6f318d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.001574797 +0000 UTC m=+3.042307124,LastTimestamp:2026-02-25 11:18:09.001574797 +0000 UTC m=+3.042307124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.255979 5005 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793ecb6f8629 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.001596457 +0000 UTC m=+3.042328784,LastTimestamp:2026-02-25 11:18:09.001596457 +0000 UTC m=+3.042328784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.259650 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897793ecb700379 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.001628537 +0000 UTC m=+3.042360864,LastTimestamp:2026-02-25 11:18:09.001628537 +0000 UTC m=+3.042360864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc 
kubenswrapper[5005]: E0225 11:18:32.263293 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793ecb73d38b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.001878411 +0000 UTC m=+3.042610758,LastTimestamp:2026-02-25 11:18:09.001878411 +0000 UTC m=+3.042610758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.267588 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793ecb7fd0b8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.00266412 +0000 UTC 
m=+3.043396447,LastTimestamp:2026-02-25 11:18:09.00266412 +0000 UTC m=+3.043396447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.271444 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793ed94b7e45 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.234116165 +0000 UTC m=+3.274848482,LastTimestamp:2026-02-25 11:18:09.234116165 +0000 UTC m=+3.274848482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.276188 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ed9510805 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.234479109 +0000 UTC m=+3.275211436,LastTimestamp:2026-02-25 11:18:09.234479109 +0000 UTC m=+3.275211436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.280753 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793eda0dc99f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.246849439 +0000 UTC m=+3.287581766,LastTimestamp:2026-02-25 11:18:09.246849439 +0000 UTC m=+3.287581766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.284478 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793eda216b0a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.248135946 +0000 UTC m=+3.288868273,LastTimestamp:2026-02-25 11:18:09.248135946 +0000 UTC m=+3.288868273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.288668 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793eda9f5a03 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.256389123 +0000 UTC m=+3.297121450,LastTimestamp:2026-02-25 11:18:09.256389123 +0000 UTC m=+3.297121450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.292403 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793edaaa4cf9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.257106681 +0000 UTC m=+3.297839008,LastTimestamp:2026-02-25 11:18:09.257106681 +0000 UTC m=+3.297839008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.297877 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793ee75a34c6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.469961414 +0000 UTC m=+3.510693751,LastTimestamp:2026-02-25 11:18:09.469961414 +0000 UTC 
m=+3.510693751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.301805 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ee77baa66 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.472154214 +0000 UTC m=+3.512886551,LastTimestamp:2026-02-25 11:18:09.472154214 +0000 UTC m=+3.512886551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.307365 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897793ee85e5529 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container 
kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.487009065 +0000 UTC m=+3.527741402,LastTimestamp:2026-02-25 11:18:09.487009065 +0000 UTC m=+3.527741402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.313421 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ee8753109 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.488507145 +0000 UTC m=+3.529239492,LastTimestamp:2026-02-25 11:18:09.488507145 +0000 UTC m=+3.529239492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.317276 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ee88a1aa8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.489877672 +0000 UTC m=+3.530609999,LastTimestamp:2026-02-25 11:18:09.489877672 +0000 UTC m=+3.530609999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.322206 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ef3a0820d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.675895309 +0000 UTC m=+3.716627656,LastTimestamp:2026-02-25 11:18:09.675895309 +0000 UTC m=+3.716627656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.327825 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ef448948b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.686910091 +0000 UTC m=+3.727642428,LastTimestamp:2026-02-25 11:18:09.686910091 +0000 UTC m=+3.727642428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.332580 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ef45f7a6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.68841073 +0000 UTC m=+3.729143067,LastTimestamp:2026-02-25 11:18:09.68841073 +0000 UTC m=+3.729143067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.340463 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793ef6b2aa2a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.727416874 +0000 UTC m=+3.768149201,LastTimestamp:2026-02-25 11:18:09.727416874 +0000 UTC m=+3.768149201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.345099 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793f00b35224 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 
11:18:09.895232036 +0000 UTC m=+3.935964353,LastTimestamp:2026-02-25 11:18:09.895232036 +0000 UTC m=+3.935964353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.349193 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793f01628896 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.906714774 +0000 UTC m=+3.947447101,LastTimestamp:2026-02-25 11:18:09.906714774 +0000 UTC m=+3.947447101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.354538 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f02a8ee1d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.928105501 +0000 UTC m=+3.968837828,LastTimestamp:2026-02-25 11:18:09.928105501 +0000 UTC m=+3.968837828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.358599 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f03762eef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.941556975 +0000 UTC m=+3.982289312,LastTimestamp:2026-02-25 11:18:09.941556975 +0000 UTC m=+3.982289312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.363240 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f332c751d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:10.742031645 +0000 UTC m=+4.782763972,LastTimestamp:2026-02-25 11:18:10.742031645 +0000 UTC m=+4.782763972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.364867 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f3f522b1e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:10.945829662 +0000 UTC m=+4.986562009,LastTimestamp:2026-02-25 11:18:10.945829662 +0000 UTC m=+4.986562009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.367778 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f400e59dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:10.958162396 +0000 UTC m=+4.998894743,LastTimestamp:2026-02-25 11:18:10.958162396 +0000 UTC m=+4.998894743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.373403 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f401fe9b9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:10.959313337 +0000 UTC m=+5.000045674,LastTimestamp:2026-02-25 11:18:10.959313337 +0000 UTC m=+5.000045674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.379064 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f4c82bf32 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.167117106 +0000 UTC m=+5.207849453,LastTimestamp:2026-02-25 11:18:11.167117106 +0000 UTC m=+5.207849453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.382933 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f4d286896 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.17797391 +0000 UTC m=+5.218706237,LastTimestamp:2026-02-25 11:18:11.17797391 +0000 UTC m=+5.218706237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.386839 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f4d3623cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.178873803 +0000 UTC m=+5.219606140,LastTimestamp:2026-02-25 11:18:11.178873803 +0000 UTC m=+5.219606140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.390971 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f5a2493e3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.395826659 +0000 UTC m=+5.436558986,LastTimestamp:2026-02-25 11:18:11.395826659 +0000 UTC m=+5.436558986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.395620 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f5ae9fa7d 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.408763517 +0000 UTC m=+5.449495844,LastTimestamp:2026-02-25 11:18:11.408763517 +0000 UTC m=+5.449495844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.400119 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f5b049100 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.410505984 +0000 UTC m=+5.451238311,LastTimestamp:2026-02-25 11:18:11.410505984 +0000 UTC m=+5.451238311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.403545 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f6711e807 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.612706823 +0000 UTC m=+5.653439190,LastTimestamp:2026-02-25 11:18:11.612706823 +0000 UTC m=+5.653439190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.408026 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f67d2e756 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.625355094 +0000 UTC m=+5.666087431,LastTimestamp:2026-02-25 11:18:11.625355094 +0000 UTC m=+5.666087431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.412092 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f67e9b205 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.626848773 +0000 UTC m=+5.667581110,LastTimestamp:2026-02-25 11:18:11.626848773 +0000 UTC m=+5.667581110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.417225 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897793f7543ff82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.850870658 +0000 UTC m=+5.891602995,LastTimestamp:2026-02-25 11:18:11.850870658 +0000 UTC m=+5.891602995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.421441 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897793f7604a368 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:11.863495528 +0000 UTC m=+5.904227865,LastTimestamp:2026-02-25 11:18:11.863495528 +0000 UTC m=+5.904227865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.426053 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 25 11:18:32 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-controller-manager-crc.1897793fb5a6808c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 25 11:18:32 crc kubenswrapper[5005]: body: Feb 25 11:18:32 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:12.931068044 +0000 UTC m=+6.971800421,LastTimestamp:2026-02-25 11:18:12.931068044 +0000 UTC m=+6.971800421,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Feb 25 11:18:32 crc kubenswrapper[5005]: > Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.429603 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793fb5a80375 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:12.931167093 +0000 UTC m=+6.971899460,LastTimestamp:2026-02-25 11:18:12.931167093 +0000 UTC m=+6.971899460,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.436695 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 25 11:18:32 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-apiserver-crc.18977941cfd05a5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 25 11:18:32 
crc kubenswrapper[5005]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 25 11:18:32 crc kubenswrapper[5005]: Feb 25 11:18:32 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:21.959952989 +0000 UTC m=+16.000685326,LastTimestamp:2026-02-25 11:18:21.959952989 +0000 UTC m=+16.000685326,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 25 11:18:32 crc kubenswrapper[5005]: > Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.440133 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18977941cfd12f2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:21.960007468 +0000 UTC m=+16.000739815,LastTimestamp:2026-02-25 11:18:21.960007468 +0000 UTC m=+16.000739815,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.443603 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18977941cfd05a5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event=< Feb 25 11:18:32 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-apiserver-crc.18977941cfd05a5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 25 11:18:32 crc kubenswrapper[5005]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 25 11:18:32 crc kubenswrapper[5005]: Feb 25 11:18:32 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:21.959952989 +0000 UTC m=+16.000685326,LastTimestamp:2026-02-25 11:18:21.967419086 +0000 UTC m=+16.008151423,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 25 11:18:32 crc kubenswrapper[5005]: > Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.447519 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18977941cfd12f2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18977941cfd12f2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:21.960007468 +0000 UTC m=+16.000739815,LastTimestamp:2026-02-25 11:18:21.967468175 +0000 UTC m=+16.008200522,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.453133 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897793ef45f7a6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793ef45f7a6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.68841073 +0000 UTC m=+3.729143067,LastTimestamp:2026-02-25 11:18:22.80334697 +0000 UTC m=+16.844079297,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.456992 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 25 11:18:32 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-controller-manager-crc.1897794209bb856a openshift-kube-controller-manager 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 25 11:18:32 crc kubenswrapper[5005]: body: Feb 25 11:18:32 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931666282 +0000 UTC m=+16.972398649,LastTimestamp:2026-02-25 11:18:22.931666282 +0000 UTC m=+16.972398649,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 25 11:18:32 crc kubenswrapper[5005]: > Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.461137 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897794209bca6f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931740402 +0000 UTC m=+16.972472779,LastTimestamp:2026-02-25 11:18:22.931740402 +0000 UTC 
m=+16.972472779,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.465217 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897793f00b35224\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793f00b35224 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.895232036 +0000 UTC m=+3.935964353,LastTimestamp:2026-02-25 11:18:23.098082374 +0000 UTC m=+17.138814701,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.469755 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897793f01628896\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897793f01628896 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:09.906714774 +0000 UTC m=+3.947447101,LastTimestamp:2026-02-25 11:18:23.106546514 +0000 UTC m=+17.147278841,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.617411 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.930495 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.930574 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.930629 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.930766 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.932227 5005 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.932260 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.932272 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.932736 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"aefec33351776ece26b3ae45cbd0ee44941bf6c6451f745d8657623198de9c14"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 25 11:18:32 crc kubenswrapper[5005]: I0225 11:18:32.932883 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://aefec33351776ece26b3ae45cbd0ee44941bf6c6451f745d8657623198de9c14" gracePeriod=30 Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.936567 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897794209bb856a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 25 11:18:32 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-controller-manager-crc.1897794209bb856a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 25 11:18:32 crc kubenswrapper[5005]: body: Feb 25 11:18:32 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931666282 +0000 UTC m=+16.972398649,LastTimestamp:2026-02-25 11:18:32.930557228 +0000 UTC m=+26.971289565,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 25 11:18:32 crc kubenswrapper[5005]: > Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.942893 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897794209bca6f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897794209bca6f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931740402 +0000 UTC m=+16.972472779,LastTimestamp:2026-02-25 11:18:32.930600909 
+0000 UTC m=+26.971333256,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:32 crc kubenswrapper[5005]: E0225 11:18:32.949278 5005 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189779445dd9c410 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:32.932869136 +0000 UTC m=+26.973601473,LastTimestamp:2026-02-25 11:18:32.932869136 +0000 UTC m=+26.973601473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:33 crc kubenswrapper[5005]: W0225 11:18:33.040923 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.041025 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 25 
11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.056987 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897793e81690854\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e81690854 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:07.759657044 +0000 UTC m=+1.800389371,LastTimestamp:2026-02-25 11:18:33.052821829 +0000 UTC m=+27.093554166,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.080093 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.080268 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.081530 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.081713 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 
11:18:33.081824 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.082618 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.082908 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.208042 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897793e9378d4bc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e9378d4bc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.0626823 +0000 UTC m=+2.103414627,LastTimestamp:2026-02-25 11:18:33.204613054 +0000 UTC m=+27.245345411,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.217694 5005 event.go:359] "Server rejected 
event (will not retry!)" err="events \"kube-controller-manager-crc.1897793e941a5fff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897793e941a5fff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:08.073269247 +0000 UTC m=+2.114001574,LastTimestamp:2026-02-25 11:18:33.216429936 +0000 UTC m=+27.257162263,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.273080 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:33 crc kubenswrapper[5005]: W0225 11:18:33.398317 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.398410 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.620812 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.851354 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.851974 5005 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="aefec33351776ece26b3ae45cbd0ee44941bf6c6451f745d8657623198de9c14" exitCode=255 Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.852108 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"aefec33351776ece26b3ae45cbd0ee44941bf6c6451f745d8657623198de9c14"} Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.852179 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.852180 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e12bfe39e38d7f0dc9d4a92290deacbe721ede5d0571ff0889849cdd397164bc"} Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.852308 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.853490 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.853538 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.853558 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.853888 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.853938 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.853956 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:33 crc kubenswrapper[5005]: I0225 11:18:33.854326 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.854622 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:33 crc kubenswrapper[5005]: W0225 11:18:33.985714 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 25 11:18:33 crc kubenswrapper[5005]: E0225 11:18:33.985781 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group 
\"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:34 crc kubenswrapper[5005]: I0225 11:18:34.623206 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:34 crc kubenswrapper[5005]: I0225 11:18:34.855286 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:34 crc kubenswrapper[5005]: I0225 11:18:34.857193 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:34 crc kubenswrapper[5005]: I0225 11:18:34.857256 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:34 crc kubenswrapper[5005]: I0225 11:18:34.857279 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:35 crc kubenswrapper[5005]: I0225 11:18:35.361013 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:35 crc kubenswrapper[5005]: I0225 11:18:35.362697 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:35 crc kubenswrapper[5005]: I0225 11:18:35.362761 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:35 crc kubenswrapper[5005]: I0225 11:18:35.362786 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:35 crc kubenswrapper[5005]: I0225 11:18:35.362860 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:35 crc kubenswrapper[5005]: E0225 11:18:35.367213 5005 kubelet_node_status.go:99] "Unable to register node with API server" 
err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:18:35 crc kubenswrapper[5005]: E0225 11:18:35.367287 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:18:35 crc kubenswrapper[5005]: W0225 11:18:35.592569 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 25 11:18:35 crc kubenswrapper[5005]: E0225 11:18:35.592627 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:35 crc kubenswrapper[5005]: I0225 11:18:35.618428 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:36 crc kubenswrapper[5005]: I0225 11:18:36.618207 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:36 crc kubenswrapper[5005]: E0225 11:18:36.779516 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:18:37 crc kubenswrapper[5005]: I0225 11:18:37.620139 5005 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:38 crc kubenswrapper[5005]: I0225 11:18:38.197756 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:38 crc kubenswrapper[5005]: I0225 11:18:38.197973 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:38 crc kubenswrapper[5005]: I0225 11:18:38.199324 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:38 crc kubenswrapper[5005]: I0225 11:18:38.199454 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:38 crc kubenswrapper[5005]: I0225 11:18:38.199474 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:38 crc kubenswrapper[5005]: I0225 11:18:38.617788 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:39 crc kubenswrapper[5005]: I0225 11:18:39.621017 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:39 crc kubenswrapper[5005]: I0225 11:18:39.930124 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:18:39 crc kubenswrapper[5005]: I0225 11:18:39.930319 5005 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 25 11:18:39 crc kubenswrapper[5005]: I0225 11:18:39.931857 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:39 crc kubenswrapper[5005]: I0225 11:18:39.931917 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:39 crc kubenswrapper[5005]: I0225 11:18:39.931937 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:40 crc kubenswrapper[5005]: I0225 11:18:40.620294 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:41 crc kubenswrapper[5005]: I0225 11:18:41.620613 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.368284 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.370102 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.370152 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.370170 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.370201 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 
25 11:18:42 crc kubenswrapper[5005]: E0225 11:18:42.377768 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:18:42 crc kubenswrapper[5005]: E0225 11:18:42.377846 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.620674 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.930139 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:18:42 crc kubenswrapper[5005]: I0225 11:18:42.930246 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:18:42 crc kubenswrapper[5005]: E0225 11:18:42.938203 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897794209bb856a\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 25 11:18:42 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-controller-manager-crc.1897794209bb856a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 25 11:18:42 crc kubenswrapper[5005]: body: Feb 25 11:18:42 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931666282 +0000 UTC m=+16.972398649,LastTimestamp:2026-02-25 11:18:42.93022167 +0000 UTC m=+36.970954027,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 25 11:18:42 crc kubenswrapper[5005]: > Feb 25 11:18:42 crc kubenswrapper[5005]: E0225 11:18:42.945588 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897794209bca6f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897794209bca6f2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931740402 +0000 UTC m=+16.972472779,LastTimestamp:2026-02-25 11:18:42.930282022 +0000 UTC m=+36.971014389,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:18:43 crc kubenswrapper[5005]: I0225 11:18:43.620463 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:44 crc kubenswrapper[5005]: I0225 11:18:44.621590 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:45 crc kubenswrapper[5005]: I0225 11:18:45.621810 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:46 crc kubenswrapper[5005]: I0225 11:18:46.619536 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:46 crc kubenswrapper[5005]: E0225 11:18:46.779692 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:18:47 crc kubenswrapper[5005]: I0225 11:18:47.620979 5005 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:47 crc kubenswrapper[5005]: I0225 11:18:47.685605 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:47 crc kubenswrapper[5005]: I0225 11:18:47.687793 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:47 crc kubenswrapper[5005]: I0225 11:18:47.687865 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:47 crc kubenswrapper[5005]: I0225 11:18:47.687885 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:47 crc kubenswrapper[5005]: I0225 11:18:47.689032 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.618115 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.894529 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.895251 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.898528 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" exitCode=255 Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.898588 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd"} Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.898636 5005 scope.go:117] "RemoveContainer" containerID="acf6f09c25cf42554d0dfb93d9601106187d3356eadc3f4ddb8bd2e7de6512e3" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.898862 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.901603 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.901659 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.901682 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:48 crc kubenswrapper[5005]: I0225 11:18:48.902703 5005 scope.go:117] "RemoveContainer" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" Feb 25 11:18:48 crc kubenswrapper[5005]: E0225 11:18:48.903144 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 
11:18:49.378589 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 11:18:49.380284 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 11:18:49.380343 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 11:18:49.380361 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 11:18:49.380430 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:49 crc kubenswrapper[5005]: E0225 11:18:49.387178 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:18:49 crc kubenswrapper[5005]: E0225 11:18:49.387272 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 11:18:49.619055 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:49 crc kubenswrapper[5005]: I0225 11:18:49.906962 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 25 11:18:50 crc kubenswrapper[5005]: I0225 11:18:50.616566 5005 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:51 crc kubenswrapper[5005]: I0225 11:18:51.621037 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:52 crc kubenswrapper[5005]: W0225 11:18:52.536256 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 25 11:18:52 crc kubenswrapper[5005]: E0225 11:18:52.536347 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:52 crc kubenswrapper[5005]: I0225 11:18:52.620974 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:52 crc kubenswrapper[5005]: W0225 11:18:52.840434 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 25 11:18:52 crc kubenswrapper[5005]: E0225 11:18:52.840526 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:52 crc kubenswrapper[5005]: I0225 11:18:52.930745 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:18:52 crc kubenswrapper[5005]: I0225 11:18:52.930836 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:18:52 crc kubenswrapper[5005]: E0225 11:18:52.938112 5005 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897794209bb856a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 25 11:18:52 crc kubenswrapper[5005]: &Event{ObjectMeta:{kube-controller-manager-crc.1897794209bb856a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers) Feb 25 11:18:52 crc kubenswrapper[5005]: body: Feb 25 11:18:52 crc kubenswrapper[5005]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:18:22.931666282 +0000 UTC m=+16.972398649,LastTimestamp:2026-02-25 11:18:52.930811585 +0000 UTC m=+46.971543942,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 25 11:18:52 crc kubenswrapper[5005]: > Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.081004 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.081283 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.083226 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.083306 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.083334 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.084517 5005 scope.go:117] "RemoveContainer" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" Feb 25 11:18:53 crc kubenswrapper[5005]: E0225 11:18:53.084910 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.273253 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.619457 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.924048 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.925842 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.925910 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.925929 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:53 crc kubenswrapper[5005]: I0225 11:18:53.926847 5005 scope.go:117] "RemoveContainer" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" Feb 25 11:18:53 crc kubenswrapper[5005]: E0225 11:18:53.927134 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:18:54 crc kubenswrapper[5005]: I0225 11:18:54.621295 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:55 crc kubenswrapper[5005]: I0225 11:18:55.620317 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:55 crc kubenswrapper[5005]: W0225 11:18:55.698272 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:55 crc kubenswrapper[5005]: E0225 11:18:55.698417 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:56 crc kubenswrapper[5005]: I0225 11:18:56.387708 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:56 crc kubenswrapper[5005]: I0225 11:18:56.389779 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:56 crc kubenswrapper[5005]: I0225 11:18:56.389841 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:56 crc kubenswrapper[5005]: I0225 11:18:56.389862 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:56 crc kubenswrapper[5005]: I0225 11:18:56.389893 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:18:56 
crc kubenswrapper[5005]: E0225 11:18:56.395957 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:18:56 crc kubenswrapper[5005]: E0225 11:18:56.396581 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:18:56 crc kubenswrapper[5005]: I0225 11:18:56.621765 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:56 crc kubenswrapper[5005]: E0225 11:18:56.779951 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:18:56 crc kubenswrapper[5005]: W0225 11:18:56.904639 5005 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 25 11:18:56 crc kubenswrapper[5005]: E0225 11:18:56.904733 5005 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 25 11:18:57 crc kubenswrapper[5005]: I0225 11:18:57.103689 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 25 11:18:57 crc kubenswrapper[5005]: I0225 11:18:57.104005 5005 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 25 11:18:57 crc kubenswrapper[5005]: I0225 11:18:57.106048 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:18:57 crc kubenswrapper[5005]: I0225 11:18:57.106105 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:18:57 crc kubenswrapper[5005]: I0225 11:18:57.106131 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:18:57 crc kubenswrapper[5005]: I0225 11:18:57.621004 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:58 crc kubenswrapper[5005]: I0225 11:18:58.621779 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:18:59 crc kubenswrapper[5005]: I0225 11:18:59.620449 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:00 crc kubenswrapper[5005]: I0225 11:19:00.620592 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:01 crc kubenswrapper[5005]: I0225 11:19:01.619729 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.622831 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.836592 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.836763 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.838081 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.838137 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.838173 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.844991 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.956458 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.958476 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.958556 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 25 11:19:02 crc kubenswrapper[5005]: I0225 11:19:02.958585 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:03 crc kubenswrapper[5005]: I0225 11:19:03.396108 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:03 crc kubenswrapper[5005]: I0225 11:19:03.397244 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:03 crc kubenswrapper[5005]: I0225 11:19:03.397282 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:03 crc kubenswrapper[5005]: I0225 11:19:03.397297 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:03 crc kubenswrapper[5005]: I0225 11:19:03.397327 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:19:03 crc kubenswrapper[5005]: E0225 11:19:03.405246 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:19:03 crc kubenswrapper[5005]: E0225 11:19:03.405548 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:19:03 crc kubenswrapper[5005]: I0225 11:19:03.620691 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:04 crc kubenswrapper[5005]: I0225 11:19:04.620757 5005 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:05 crc kubenswrapper[5005]: I0225 11:19:05.618589 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:06 crc kubenswrapper[5005]: I0225 11:19:06.617286 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:06 crc kubenswrapper[5005]: E0225 11:19:06.780143 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:19:07 crc kubenswrapper[5005]: I0225 11:19:07.617812 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:07 crc kubenswrapper[5005]: I0225 11:19:07.684775 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:07 crc kubenswrapper[5005]: I0225 11:19:07.686202 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:07 crc kubenswrapper[5005]: I0225 11:19:07.686321 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:07 crc kubenswrapper[5005]: I0225 11:19:07.686416 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 25 11:19:07 crc kubenswrapper[5005]: I0225 11:19:07.686983 5005 scope.go:117] "RemoveContainer" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" Feb 25 11:19:07 crc kubenswrapper[5005]: E0225 11:19:07.687265 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:19:08 crc kubenswrapper[5005]: I0225 11:19:08.618461 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:09 crc kubenswrapper[5005]: I0225 11:19:09.619475 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:10 crc kubenswrapper[5005]: I0225 11:19:10.405730 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:10 crc kubenswrapper[5005]: I0225 11:19:10.406774 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:10 crc kubenswrapper[5005]: I0225 11:19:10.406806 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:10 crc kubenswrapper[5005]: I0225 11:19:10.406816 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:10 crc 
kubenswrapper[5005]: I0225 11:19:10.406837 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:19:10 crc kubenswrapper[5005]: E0225 11:19:10.410719 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 25 11:19:10 crc kubenswrapper[5005]: E0225 11:19:10.411095 5005 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 25 11:19:10 crc kubenswrapper[5005]: I0225 11:19:10.617966 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:11 crc kubenswrapper[5005]: I0225 11:19:11.612064 5005 csr.go:261] certificate signing request csr-qdsqv is approved, waiting to be issued Feb 25 11:19:11 crc kubenswrapper[5005]: I0225 11:19:11.618955 5005 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 25 11:19:11 crc kubenswrapper[5005]: I0225 11:19:11.624019 5005 csr.go:257] certificate signing request csr-qdsqv is issued Feb 25 11:19:11 crc kubenswrapper[5005]: I0225 11:19:11.697969 5005 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 25 11:19:12 crc kubenswrapper[5005]: I0225 11:19:12.473321 5005 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 25 11:19:12 crc kubenswrapper[5005]: I0225 11:19:12.625850 5005 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-25 03:19:24.69199595 +0000 UTC Feb 25 11:19:12 crc kubenswrapper[5005]: I0225 11:19:12.625908 5005 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6544h0m12.066093585s for next certificate rotation Feb 25 11:19:16 crc kubenswrapper[5005]: E0225 11:19:16.781209 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.411653 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.413331 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.413365 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.413401 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.413523 5005 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.421204 5005 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.421624 5005 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.421664 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.425506 5005 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.425558 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.425579 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.425601 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.425617 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:17Z","lastTransitionTime":"2026-02-25T11:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.435696 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.452400 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.452876 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.453020 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.453217 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.453475 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:17Z","lastTransitionTime":"2026-02-25T11:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.464941 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.471491 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.471523 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.471534 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.471550 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.471560 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:17Z","lastTransitionTime":"2026-02-25T11:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.479006 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.484744 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.484803 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.484823 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.484854 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:17 crc kubenswrapper[5005]: I0225 11:19:17.484876 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:17Z","lastTransitionTime":"2026-02-25T11:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.494087 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.494240 5005 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.494272 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.594877 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.695041 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.795874 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.896852 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:17 crc kubenswrapper[5005]: E0225 11:19:17.996975 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.097829 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.198417 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.299150 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.399804 5005 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.500325 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.601461 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.702174 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.803462 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:18 crc kubenswrapper[5005]: E0225 11:19:18.904145 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.004532 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.105175 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.205955 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.306777 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.407273 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.508349 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc 
kubenswrapper[5005]: E0225 11:19:19.609250 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.710472 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.811399 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:19 crc kubenswrapper[5005]: E0225 11:19:19.911716 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.012385 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.113459 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.214358 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.315316 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.416177 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.516301 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.617293 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.717360 5005 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.818420 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:20 crc kubenswrapper[5005]: E0225 11:19:20.918957 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.019720 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.120320 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.220946 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.321480 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.422617 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.522749 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.623632 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.723975 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.824578 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:21 crc kubenswrapper[5005]: E0225 11:19:21.925587 5005 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.026042 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.126347 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.227039 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.327993 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.428643 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.528951 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.629732 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: I0225 11:19:22.684951 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:22 crc kubenswrapper[5005]: I0225 11:19:22.686082 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:22 crc kubenswrapper[5005]: I0225 11:19:22.686176 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:22 crc kubenswrapper[5005]: I0225 11:19:22.686195 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:22 crc kubenswrapper[5005]: I0225 
11:19:22.687615 5005 scope.go:117] "RemoveContainer" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.730767 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.831723 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:22 crc kubenswrapper[5005]: E0225 11:19:22.932084 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.007292 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.009748 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375"} Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.009887 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.010750 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.010794 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.010812 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.032738 5005 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.133668 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.234549 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.272941 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.334650 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.435315 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.536444 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.636901 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.684840 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.686017 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.686075 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:23 crc kubenswrapper[5005]: I0225 11:19:23.686100 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 
11:19:23.737318 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.838236 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:23 crc kubenswrapper[5005]: E0225 11:19:23.939026 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.014413 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.015094 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.018034 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" exitCode=255 Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.018102 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375"} Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.018165 5005 scope.go:117] "RemoveContainer" containerID="c77ea7bd4fbefb3d7a618523f56e9b9fc2611206f6b5de06d68a9ed26d940ebd" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.018118 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.019634 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.019673 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.019691 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:24 crc kubenswrapper[5005]: I0225 11:19:24.020627 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.020993 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.039791 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.140873 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.241827 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.342433 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.443357 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.543637 5005 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.644230 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.744976 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.846113 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:24 crc kubenswrapper[5005]: E0225 11:19:24.946683 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: I0225 11:19:25.022900 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 25 11:19:25 crc kubenswrapper[5005]: I0225 11:19:25.025716 5005 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 25 11:19:25 crc kubenswrapper[5005]: I0225 11:19:25.027068 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:25 crc kubenswrapper[5005]: I0225 11:19:25.027147 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:25 crc kubenswrapper[5005]: I0225 11:19:25.027166 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:25 crc kubenswrapper[5005]: I0225 11:19:25.028139 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.028501 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.046774 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.147402 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.247566 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.348787 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.449212 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.549916 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.650274 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.750752 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.851616 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:25 crc kubenswrapper[5005]: E0225 11:19:25.952535 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" 
not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.053645 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.154211 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.254864 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.355164 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.456521 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.557272 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.658354 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.759068 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.782610 5005 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.859267 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:26 crc kubenswrapper[5005]: E0225 11:19:26.959919 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.061033 5005 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.161434 5005 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.224742 5005 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.264293 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.264342 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.264358 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.264410 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.264428 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.367327 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.367401 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.367416 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.367434 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.367447 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.469894 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.469933 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.469941 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.469958 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.469967 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.573651 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.573724 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.573747 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.573774 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.573791 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.638499 5005 apiserver.go:52] "Watching apiserver" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.648665 5005 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.649587 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-splp7","openshift-multus/multus-additional-cni-plugins-7l6vx","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc","openshift-machine-config-operator/machine-config-daemon-tct5q","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-image-registry/node-ca-xx5w9","openshift-multus/network-metrics-daemon-x2fvb","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-ovn-kubernetes/ovnkube-node-bfx5c","openshift-multus/multus-dsd74","openshift-network-diagnostics/network-check-target-xd92c"] Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.650562 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.650578 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.650656 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.650774 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.650965 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.651122 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.651639 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.651718 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.652001 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.651738 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.652191 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.652261 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.652305 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.652344 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.652986 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.653913 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.654487 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.654936 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.662171 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.662466 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.662608 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.662705 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.662913 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.663044 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.663290 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.663295 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.663512 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.663786 5005 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.663891 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.664077 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.664125 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.664187 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.664349 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.664431 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.665721 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.665822 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.665827 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.666025 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.666028 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.666516 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.666974 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667163 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667217 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667274 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667219 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667451 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667495 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667532 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667625 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.667715 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.668289 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.668322 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.668524 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.670119 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.671034 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.678437 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.678482 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.678495 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.678510 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.678522 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.683698 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.695883 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.712567 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.724875 5005 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.729889 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.741059 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.754664 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.755504 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.755531 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.755543 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.755563 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 
11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.755576 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.772681 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.774491 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.777011 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.777055 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.777067 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.777086 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.777097 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.788107 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.790528 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.794629 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.794675 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.794688 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.794705 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.794716 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.805519 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.805942 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806026 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806093 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806149 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806195 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806223 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806244 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806291 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806340 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 
11:19:27.806425 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806464 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806486 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806496 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806527 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806559 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806598 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806639 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806684 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806731 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806776 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806824 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806914 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.806964 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807010 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807034 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807051 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807085 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807118 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807148 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807179 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807198 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807211 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807242 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807275 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807306 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807340 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807354 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807401 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807433 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807444 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807464 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807496 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807528 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807557 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807587 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807639 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807678 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807713 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807746 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807779 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807812 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807848 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807880 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807915 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807966 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808004 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808036 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808066 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808226 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808261 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808293 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808326 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808361 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808424 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808459 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808489 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808522 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808553 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808585 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808621 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808657 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808687 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808718 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808750 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808784 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808818 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808849 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808880 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808919 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808965 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809001 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809034 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809065 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809096 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809128 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809162 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809201 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809238 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809270 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809301 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809332 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809363 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809422 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809456 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809489 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809523 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809556 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809599 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809632 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809663 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809694 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809727 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809765 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809796 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809828 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809865 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809898 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809945 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809992 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810039 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810089 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810124 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810156 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810189 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810221 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810256 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810295 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 25 11:19:27
crc kubenswrapper[5005]: I0225 11:19:27.810327 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810367 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810432 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810467 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810502 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810536 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810570 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810469 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810603 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810637 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810670 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810703 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810736 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810768 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810802 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810836 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810892 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810942 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810991 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811180 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811219 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811268 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811320 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811369 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.811451 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811503 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811558 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811612 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811654 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811689 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811723 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811756 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811798 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811833 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811873 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811910 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811961 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812013 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812074 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812126 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812176 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812232 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812286 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812339 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812420 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812472 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 
25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807715 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807852 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.807956 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808316 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808315 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808721 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808726 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.808736 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809005 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809235 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809330 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809431 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809441 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809492 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809673 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809696 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.809715 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810254 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810408 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810585 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810626 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810625 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810850 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810959 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.810976 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811201 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811572 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811958 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811971 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811996 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.811982 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812322 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812396 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.812515 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:28.312495538 +0000 UTC m=+82.353227865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.816927 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.816988 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817033 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817075 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817115 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817154 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817190 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.817228 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817270 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817308 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817346 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817409 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817448 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817487 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817523 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817559 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817596 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817638 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817675 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817710 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817749 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817788 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817825 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817862 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817900 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817938 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.817974 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818012 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818047 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818083 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818117 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818149 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818189 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818229 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818268 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818304 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818341 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818403 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818439 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818535 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818584 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-systemd-units\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818620 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-script-lib\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818668 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818705 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03175783-f1a5-4ac6-b942-91a23ab4439d-cni-binary-copy\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818743 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12691472-eb44-46a1-bd71-cf3250a90e2b-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818779 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n28xf\" (UniqueName: \"kubernetes.io/projected/12691472-eb44-46a1-bd71-cf3250a90e2b-kube-api-access-n28xf\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818816 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-var-lib-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818852 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-netd\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818923 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-cni-multus\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818972 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.818977 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819007 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819487 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819531 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 
25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819555 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819588 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819599 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812600 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819614 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812683 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819627 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.819567 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-cni-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822842 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822943 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-os-release\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822988 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-k8s-cni-cncf-io\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823052 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6300726a-8703-4d2a-9688-264da029b561-serviceca\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823102 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-socket-dir-parent\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823148 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-netns\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823246 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-kubelet\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823288 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/2e33bce4-e290-4389-b690-398e3566f35d-hosts-file\") pod \"node-resolver-splp7\" (UID: \"2e33bce4-e290-4389-b690-398e3566f35d\") " pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823326 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823356 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-ovn\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823428 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649nb\" (UniqueName: \"kubernetes.io/projected/67964f07-93aa-42ec-90a7-730363ab668b-kube-api-access-649nb\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823463 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-kubelet\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823494 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-log-socket\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823523 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-env-overrides\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823555 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-conf-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823587 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-multus-certs\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823623 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12691472-eb44-46a1-bd71-cf3250a90e2b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823666 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823702 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823734 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823770 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-system-cni-dir\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823804 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-systemd\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823855 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-etc-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824206 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-node-log\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824245 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-cnibin\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824275 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-daemon-config\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824310 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmtt\" (UniqueName: \"kubernetes.io/projected/2e33bce4-e290-4389-b690-398e3566f35d-kube-api-access-cdmtt\") pod \"node-resolver-splp7\" (UID: \"2e33bce4-e290-4389-b690-398e3566f35d\") " pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824341 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824407 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qdn\" (UniqueName: \"kubernetes.io/projected/6300726a-8703-4d2a-9688-264da029b561-kube-api-access-s7qdn\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824440 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-mcd-auth-proxy-config\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824526 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wmb\" (UniqueName: \"kubernetes.io/projected/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-kube-api-access-r2wmb\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824566 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-system-cni-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " 
pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824600 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496d07b-7684-4d5f-b36e-be187e76a3de-ovn-node-metrics-cert\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824633 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5665\" (UniqueName: \"kubernetes.io/projected/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-kube-api-access-m5665\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824641 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824756 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-os-release\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824790 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-hostroot\") pod \"multus-dsd74\" (UID: 
\"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824844 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cnibin\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824873 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-rootfs\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824902 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-etc-kubernetes\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824977 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cni-binary-copy\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825013 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-proxy-tls\") pod 
\"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825048 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825085 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12691472-eb44-46a1-bd71-cf3250a90e2b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825155 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-slash\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825187 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-bin\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825218 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-cni-bin\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825260 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825292 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825323 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z24kc\" (UniqueName: \"kubernetes.io/projected/c496d07b-7684-4d5f-b36e-be187e76a3de-kube-api-access-z24kc\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825354 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8fbb\" (UniqueName: \"kubernetes.io/projected/03175783-f1a5-4ac6-b942-91a23ab4439d-kube-api-access-v8fbb\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825416 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825452 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825535 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6300726a-8703-4d2a-9688-264da029b561-host\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825966 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825998 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.826022 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-netns\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826043 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-config\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826072 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826294 5005 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826313 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.812723 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
(OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826871 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813024 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813078 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813065 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813201 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813584 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813654 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813661 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.813871 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.814231 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.814247 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.814497 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.814588 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.814757 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.814744 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815029 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815017 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815243 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815539 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815536 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815555 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.815621 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.816312 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.816393 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.816409 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.821239 5005 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822013 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822530 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822712 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822740 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822587 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822867 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.822927 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823159 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823249 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823269 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823480 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.827217 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823619 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823638 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823727 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823797 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.823824 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824067 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824078 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824090 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.824307 5005 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.827340 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:28.327322948 +0000 UTC m=+82.368055285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.827885 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.828818 5005 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.828970 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.829127 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.830090 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.830414 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.830412 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.830537 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.830747 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831011 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831016 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824566 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824598 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824924 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825002 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825020 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825095 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825594 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825760 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.825777 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826522 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.826779 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831266 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831344 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831562 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831623 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831849 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831889 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.831994 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.832132 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.832340 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.832360 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.832461 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.832770 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.832789 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.833438 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.834061 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.834109 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.834158 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.834977 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.835242 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.835888 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.836046 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.836465 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.836306 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.836632 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.836809 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.837033 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.837037 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.837469 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.837609 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.837702 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.838567 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.838606 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.838931 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.839289 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.839357 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.839396 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.839525 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.839716 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.840062 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.840136 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.840183 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.840521 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.840560 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.840787 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.840915 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:28.340889542 +0000 UTC m=+82.381621939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.841221 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.841571 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.841865 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.842346 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.842779 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.824491 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.843215 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.844447 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.844700 5005 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.844632 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.843747 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.843547 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 
11:19:27.846056 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.846163 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:28.346134854 +0000 UTC m=+82.386867201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.846185 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.846495 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.846679 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.827029 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.847449 5005 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.847484 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.847504 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.847780 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848084 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848113 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848166 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848210 5005 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848262 5005 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848294 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848325 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 
11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848357 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848416 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848448 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848513 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848540 5005 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848585 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848658 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848704 5005 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.848917 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849028 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849128 5005 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849159 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849187 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849455 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849740 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849830 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849887 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849920 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.849953 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850006 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850119 5005 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850139 5005 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850155 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850172 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850169 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850191 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850211 5005 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850232 5005 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850251 5005 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850267 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850284 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.850301 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.851427 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.851448 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.853582 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.853610 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.853735 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.854723 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.854966 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.855098 5005 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.855284 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:28.355253296 +0000 UTC m=+82.395985703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.855285 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.855415 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.855523 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.855537 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.855594 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.856307 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.857287 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.859780 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.859923 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.860068 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.860517 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.861004 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.861272 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.861431 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.861592 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.861814 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.861185 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.863682 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.864040 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.864242 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.864282 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.864693 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.864710 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.868328 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.868404 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.868597 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.868552 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.868729 5005 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.869063 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.869154 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.871662 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.872274 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.872325 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.872341 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.872360 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.872399 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.878046 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.879416 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.884168 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.889108 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.894996 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.897517 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.904127 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.907334 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.917074 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.950833 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-hostroot\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.950881 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cnibin\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.950903 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5665\" (UniqueName: \"kubernetes.io/projected/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-kube-api-access-m5665\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.950924 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-os-release\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.950968 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cni-binary-copy\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.950992 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-rootfs\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951009 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-etc-kubernetes\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951010 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cnibin\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951031 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951053 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-proxy-tls\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951073 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-slash\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951092 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-bin\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951110 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-cni-bin\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951130 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12691472-eb44-46a1-bd71-cf3250a90e2b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951150 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z24kc\" (UniqueName: \"kubernetes.io/projected/c496d07b-7684-4d5f-b36e-be187e76a3de-kube-api-access-z24kc\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951171 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8fbb\" (UniqueName: \"kubernetes.io/projected/03175783-f1a5-4ac6-b942-91a23ab4439d-kube-api-access-v8fbb\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951216 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951242 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6300726a-8703-4d2a-9688-264da029b561-host\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951262 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-netns\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951281 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-config\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.951310 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951339 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-script-lib\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951406 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03175783-f1a5-4ac6-b942-91a23ab4439d-cni-binary-copy\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951447 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12691472-eb44-46a1-bd71-cf3250a90e2b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951467 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n28xf\" (UniqueName: \"kubernetes.io/projected/12691472-eb44-46a1-bd71-cf3250a90e2b-kube-api-access-n28xf\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951488 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951511 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-systemd-units\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951533 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-cni-multus\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951556 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951580 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-var-lib-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951602 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-netd\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951625 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-cni-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951646 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951678 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-os-release\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951703 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-k8s-cni-cncf-io\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.951725 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-netns\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951742 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cni-binary-copy\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951746 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-kubelet\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951794 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-kubelet\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951807 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2e33bce4-e290-4389-b690-398e3566f35d-hosts-file\") pod \"node-resolver-splp7\" (UID: \"2e33bce4-e290-4389-b690-398e3566f35d\") " pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951825 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951845 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-ovn\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951868 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6300726a-8703-4d2a-9688-264da029b561-serviceca\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951889 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-socket-dir-parent\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951910 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-log-socket\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951926 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-env-overrides\") 
pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951953 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-conf-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951957 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-rootfs\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951972 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-multus-certs\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952032 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-os-release\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952031 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12691472-eb44-46a1-bd71-cf3250a90e2b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952031 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-bin\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952070 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649nb\" (UniqueName: \"kubernetes.io/projected/67964f07-93aa-42ec-90a7-730363ab668b-kube-api-access-649nb\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952062 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2e33bce4-e290-4389-b690-398e3566f35d-hosts-file\") pod \"node-resolver-splp7\" (UID: \"2e33bce4-e290-4389-b690-398e3566f35d\") " pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952091 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-ovn\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952105 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-kubelet\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 
11:19:27.952139 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-system-cni-dir\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952143 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-etc-kubernetes\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952169 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-systemd\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952199 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-etc-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952233 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-etc-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952254 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-node-log\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952278 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-cni-bin\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952295 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmtt\" (UniqueName: \"kubernetes.io/projected/2e33bce4-e290-4389-b690-398e3566f35d-kube-api-access-cdmtt\") pod \"node-resolver-splp7\" (UID: \"2e33bce4-e290-4389-b690-398e3566f35d\") " pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952331 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952364 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s7qdn\" (UniqueName: \"kubernetes.io/projected/6300726a-8703-4d2a-9688-264da029b561-kube-api-access-s7qdn\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952427 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-cnibin\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952459 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-daemon-config\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952489 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-system-cni-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952520 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496d07b-7684-4d5f-b36e-be187e76a3de-ovn-node-metrics-cert\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952553 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-mcd-auth-proxy-config\") 
pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952578 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-netns\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952585 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2wmb\" (UniqueName: \"kubernetes.io/projected/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-kube-api-access-r2wmb\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952707 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952728 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952748 5005 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952769 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 
11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952789 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952807 5005 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952825 5005 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952842 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952874 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952894 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952913 5005 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952910 
5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-cnibin\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952932 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952950 5005 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952968 5005 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952987 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953005 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953024 5005 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 
11:19:27.953042 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953059 5005 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953077 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953095 5005 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953112 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953128 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953146 5005 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953163 5005 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953181 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953199 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953216 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953233 5005 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953250 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953270 5005 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953290 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953307 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953324 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953342 5005 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953361 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953403 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953420 5005 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953441 5005 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node 
\"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953458 5005 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953453 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-hostroot\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953476 5005 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953494 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953512 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953530 5005 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953550 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953569 5005 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953587 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953608 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953627 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953643 5005 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953661 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953670 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-conf-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 
11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953672 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-slash\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953718 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-log-socket\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953678 5005 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953752 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953768 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953772 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-node-log\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953779 5005 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953803 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953829 5005 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953853 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953872 5005 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953869 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953893 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953914 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953936 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953955 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953973 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953993 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954013 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954019 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-kubelet\") pod \"ovnkube-node-bfx5c\" (UID: 
\"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954033 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.951993 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-multus-certs\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954067 5005 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954096 5005 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954127 5005 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954154 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954180 5005 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954205 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954223 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953641 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-socket-dir-parent\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954255 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952077 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954274 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.954292 5005 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954309 5005 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954326 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954344 5005 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954362 5005 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954406 5005 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954424 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954429 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-system-cni-dir\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954442 5005 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954462 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954479 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954497 5005 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954515 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954535 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.954547 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-systemd\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954556 5005 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954577 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954596 5005 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954612 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-system-cni-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954616 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954664 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954687 5005 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.954692 5005 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954738 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-systemd-units\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954516 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12691472-eb44-46a1-bd71-cf3250a90e2b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: E0225 11:19:27.954769 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs podName:67964f07-93aa-42ec-90a7-730363ab668b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:28.454746152 +0000 UTC m=+82.495478519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs") pod "network-metrics-daemon-x2fvb" (UID: "67964f07-93aa-42ec-90a7-730363ab668b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954775 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-daemon-config\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952555 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6300726a-8703-4d2a-9688-264da029b561-host\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954817 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-var-lib-cni-multus\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954706 5005 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954873 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.952523 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.954938 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-var-lib-openvswitch\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955424 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955473 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-netd\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955503 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-k8s-cni-cncf-io\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " 
pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955508 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6300726a-8703-4d2a-9688-264da029b561-serviceca\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955553 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-os-release\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955560 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-host-run-netns\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955606 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03175783-f1a5-4ac6-b942-91a23ab4439d-multus-cni-dir\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.953416 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12691472-eb44-46a1-bd71-cf3250a90e2b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955767 5005 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.955989 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03175783-f1a5-4ac6-b942-91a23ab4439d-cni-binary-copy\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956447 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956476 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956500 5005 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956518 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956536 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.956554 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956577 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956596 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956612 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956630 5005 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956652 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956669 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956688 5005 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956706 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956724 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956741 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956758 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956775 5005 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956791 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956808 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956825 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956851 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956868 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956884 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956901 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956917 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956934 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" 
DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956951 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956970 5005 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.956987 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957005 5005 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957022 5005 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957038 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957055 5005 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957074 5005 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957090 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957108 5005 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957126 5005 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957143 5005 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957160 5005 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957177 5005 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957194 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957210 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957227 5005 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957246 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957263 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957279 5005 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957296 5005 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957313 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957329 5005 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957365 5005 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957410 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957428 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957445 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957462 5005 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.957480 5005 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 25 
11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.959496 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-proxy-tls\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.960026 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12691472-eb44-46a1-bd71-cf3250a90e2b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.963899 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-config\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.965507 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-script-lib\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.967849 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496d07b-7684-4d5f-b36e-be187e76a3de-ovn-node-metrics-cert\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc 
kubenswrapper[5005]: I0225 11:19:27.967952 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-env-overrides\") pod \"ovnkube-node-bfx5c\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.968191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-mcd-auth-proxy-config\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.976330 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.976410 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.976427 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.976453 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.976470 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:27Z","lastTransitionTime":"2026-02-25T11:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.977227 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2wmb\" (UniqueName: \"kubernetes.io/projected/d56aef23-d794-49a4-8e6b-2c9e2d1adebf-kube-api-access-r2wmb\") pod \"machine-config-daemon-tct5q\" (UID: \"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\") " pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.978111 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5665\" (UniqueName: \"kubernetes.io/projected/ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a-kube-api-access-m5665\") pod \"multus-additional-cni-plugins-7l6vx\" (UID: \"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\") " pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.978517 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8fbb\" (UniqueName: \"kubernetes.io/projected/03175783-f1a5-4ac6-b942-91a23ab4439d-kube-api-access-v8fbb\") pod \"multus-dsd74\" (UID: \"03175783-f1a5-4ac6-b942-91a23ab4439d\") " pod="openshift-multus/multus-dsd74" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.978584 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n28xf\" (UniqueName: \"kubernetes.io/projected/12691472-eb44-46a1-bd71-cf3250a90e2b-kube-api-access-n28xf\") pod \"ovnkube-control-plane-749d76644c-9wjgc\" (UID: \"12691472-eb44-46a1-bd71-cf3250a90e2b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.978705 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z24kc\" (UniqueName: \"kubernetes.io/projected/c496d07b-7684-4d5f-b36e-be187e76a3de-kube-api-access-z24kc\") pod \"ovnkube-node-bfx5c\" (UID: 
\"c496d07b-7684-4d5f-b36e-be187e76a3de\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.979501 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qdn\" (UniqueName: \"kubernetes.io/projected/6300726a-8703-4d2a-9688-264da029b561-kube-api-access-s7qdn\") pod \"node-ca-xx5w9\" (UID: \"6300726a-8703-4d2a-9688-264da029b561\") " pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.980474 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmtt\" (UniqueName: \"kubernetes.io/projected/2e33bce4-e290-4389-b690-398e3566f35d-kube-api-access-cdmtt\") pod \"node-resolver-splp7\" (UID: \"2e33bce4-e290-4389-b690-398e3566f35d\") " pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.986526 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649nb\" (UniqueName: \"kubernetes.io/projected/67964f07-93aa-42ec-90a7-730363ab668b-kube-api-access-649nb\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:27 crc kubenswrapper[5005]: I0225 11:19:27.992833 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.008342 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-225074ddd4c0256100b4fd06dfbc35820c21f998e5ea405c86d9c2fb88618969 WatchSource:0}: Error finding container 225074ddd4c0256100b4fd06dfbc35820c21f998e5ea405c86d9c2fb88618969: Status 404 returned error can't find the container with id 225074ddd4c0256100b4fd06dfbc35820c21f998e5ea405c86d9c2fb88618969 Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.010779 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.011096 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 25 11:19:28 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: source /etc/kubernetes/apiserver-url.env Feb 25 11:19:28 crc kubenswrapper[5005]: else Feb 25 11:19:28 crc kubenswrapper[5005]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 25 11:19:28 crc kubenswrapper[5005]: exit 1 Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 25 11:19:28 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.012830 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.027825 5005 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-01ea4ded82e3c06dfb1bbea64226f8eab0b8275c9d7a1b5bdcc0bcaab64da0f9 WatchSource:0}: Error finding container 01ea4ded82e3c06dfb1bbea64226f8eab0b8275c9d7a1b5bdcc0bcaab64da0f9: Status 404 returned error can't find the container with id 01ea4ded82e3c06dfb1bbea64226f8eab0b8275c9d7a1b5bdcc0bcaab64da0f9 Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.031559 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ -f "/env/_master" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: source "/env/_master" Feb 25 11:19:28 crc kubenswrapper[5005]: set +o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 25 11:19:28 crc kubenswrapper[5005]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 25 11:19:28 crc kubenswrapper[5005]: ho_enable="--enable-hybrid-overlay" Feb 25 11:19:28 crc kubenswrapper[5005]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 25 11:19:28 crc kubenswrapper[5005]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 25 11:19:28 crc kubenswrapper[5005]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 25 11:19:28 crc kubenswrapper[5005]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 25 11:19:28 crc kubenswrapper[5005]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 25 11:19:28 crc kubenswrapper[5005]: --webhook-host=127.0.0.1 \ Feb 25 11:19:28 crc kubenswrapper[5005]: --webhook-port=9743 \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${ho_enable} \ Feb 25 11:19:28 crc kubenswrapper[5005]: --enable-interconnect \ Feb 25 11:19:28 crc kubenswrapper[5005]: --disable-approver \ Feb 25 11:19:28 crc kubenswrapper[5005]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 25 11:19:28 crc kubenswrapper[5005]: --wait-for-kubernetes-api=200s \ Feb 25 11:19:28 crc kubenswrapper[5005]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 25 11:19:28 crc kubenswrapper[5005]: --loglevel="${LOGLEVEL}" Feb 25 11:19:28 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.032830 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.037006 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ -f "/env/_master" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: source "/env/_master" Feb 25 11:19:28 crc kubenswrapper[5005]: set +o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 25 11:19:28 crc kubenswrapper[5005]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 25 11:19:28 crc kubenswrapper[5005]: --disable-webhook \ Feb 25 11:19:28 crc kubenswrapper[5005]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 25 11:19:28 crc kubenswrapper[5005]: --loglevel="${LOGLEVEL}" Feb 25 11:19:28 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.037293 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01ea4ded82e3c06dfb1bbea64226f8eab0b8275c9d7a1b5bdcc0bcaab64da0f9"} Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.038202 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.038683 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"225074ddd4c0256100b4fd06dfbc35820c21f998e5ea405c86d9c2fb88618969"} Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.041243 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 25 11:19:28 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: source /etc/kubernetes/apiserver-url.env Feb 25 11:19:28 crc kubenswrapper[5005]: else Feb 25 11:19:28 crc kubenswrapper[5005]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 25 11:19:28 crc kubenswrapper[5005]: exit 1 Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 25 11:19:28 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.042365 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.042520 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.052050 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.053311 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-33744808ce141729c89151ae1fcb954b67933173b91614dd941dd8cc793b99ca WatchSource:0}: Error finding container 33744808ce141729c89151ae1fcb954b67933173b91614dd941dd8cc793b99ca: Status 404 returned error can't find the container with id 33744808ce141729c89151ae1fcb954b67933173b91614dd941dd8cc793b99ca Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.054547 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.057840 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.059068 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.059232 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab0464f0_94f5_4c58_8b46_0dbfc3c15a4a.slice/crio-054e357c64bc4b18dcf88957e64e92e81d88b9abae7bd526b3a1dafd89005788 WatchSource:0}: Error finding container 054e357c64bc4b18dcf88957e64e92e81d88b9abae7bd526b3a1dafd89005788: Status 404 returned error can't find the container with id 054e357c64bc4b18dcf88957e64e92e81d88b9abae7bd526b3a1dafd89005788 Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.061163 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5665,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-7l6vx_openshift-multus(ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.062413 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" podUID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.064064 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dsd74" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.064431 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.067221 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc496d07b_7684_4d5f_b36e_be187e76a3de.slice/crio-4a02fd2dd8ef9c173f9e5cd97f53894b0bbf7a0424f96b8c29d8e326350bbf5b WatchSource:0}: Error finding container 4a02fd2dd8ef9c173f9e5cd97f53894b0bbf7a0424f96b8c29d8e326350bbf5b: Status 404 returned error can't find the container with id 4a02fd2dd8ef9c173f9e5cd97f53894b0bbf7a0424f96b8c29d8e326350bbf5b Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.071885 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.072305 5005 kuberuntime_manager.go:1274] 
"Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 25 11:19:28 crc kubenswrapper[5005]: apiVersion: v1 Feb 25 11:19:28 crc kubenswrapper[5005]: clusters: Feb 25 11:19:28 crc kubenswrapper[5005]: - cluster: Feb 25 11:19:28 crc kubenswrapper[5005]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 25 11:19:28 crc kubenswrapper[5005]: server: https://api-int.crc.testing:6443 Feb 25 11:19:28 crc kubenswrapper[5005]: name: default-cluster Feb 25 11:19:28 crc kubenswrapper[5005]: contexts: Feb 25 11:19:28 crc kubenswrapper[5005]: - context: Feb 25 11:19:28 crc kubenswrapper[5005]: cluster: default-cluster Feb 25 11:19:28 crc kubenswrapper[5005]: namespace: default Feb 25 11:19:28 crc kubenswrapper[5005]: user: default-auth Feb 25 11:19:28 crc kubenswrapper[5005]: name: default-context Feb 25 11:19:28 crc kubenswrapper[5005]: current-context: default-context Feb 25 11:19:28 crc kubenswrapper[5005]: kind: Config Feb 25 11:19:28 crc kubenswrapper[5005]: preferences: {} Feb 25 11:19:28 crc kubenswrapper[5005]: users: Feb 25 11:19:28 crc kubenswrapper[5005]: - name: default-auth Feb 25 11:19:28 crc kubenswrapper[5005]: user: Feb 25 11:19:28 crc kubenswrapper[5005]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 25 11:19:28 crc kubenswrapper[5005]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 25 11:19:28 crc kubenswrapper[5005]: EOF Feb 25 11:19:28 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z24kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bfx5c_openshift-ovn-kubernetes(c496d07b-7684-4d5f-b36e-be187e76a3de): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.073705 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.078225 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.079098 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.079133 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.079147 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.079163 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.079175 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.080305 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03175783_f1a5_4ac6_b942_91a23ab4439d.slice/crio-57131b42ebbbd25574a06b93c2c630505f7005ad74931b288a7eac9a4cbb7747 WatchSource:0}: Error finding container 57131b42ebbbd25574a06b93c2c630505f7005ad74931b288a7eac9a4cbb7747: Status 404 returned error can't find the container with id 57131b42ebbbd25574a06b93c2c630505f7005ad74931b288a7eac9a4cbb7747 Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.083735 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 25 11:19:28 crc kubenswrapper[5005]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 25 11:19:28 crc kubenswrapper[5005]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8fbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-dsd74_openshift-multus(03175783-f1a5-4ac6-b942-91a23ab4439d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.084899 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-dsd74" podUID="03175783-f1a5-4ac6-b942-91a23ab4439d" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.086275 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.088563 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.093473 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 25 11:19:28 crc kubenswrapper[5005]: set -euo pipefail Feb 25 11:19:28 crc kubenswrapper[5005]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 25 11:19:28 crc kubenswrapper[5005]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 25 11:19:28 crc kubenswrapper[5005]: # As the secret mount is optional we must wait for the files to be present. Feb 25 11:19:28 crc kubenswrapper[5005]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 25 11:19:28 crc kubenswrapper[5005]: TS=$(date +%s) Feb 25 11:19:28 crc kubenswrapper[5005]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 25 11:19:28 crc kubenswrapper[5005]: HAS_LOGGED_INFO=0 Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: log_missing_certs(){ Feb 25 11:19:28 crc kubenswrapper[5005]: CUR_TS=$(date +%s) Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 25 11:19:28 crc kubenswrapper[5005]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 25 11:19:28 crc kubenswrapper[5005]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 25 11:19:28 crc kubenswrapper[5005]: HAS_LOGGED_INFO=1 Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: } Feb 25 11:19:28 crc kubenswrapper[5005]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 25 11:19:28 crc kubenswrapper[5005]: log_missing_certs Feb 25 11:19:28 crc kubenswrapper[5005]: sleep 5 Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 25 11:19:28 crc kubenswrapper[5005]: exec /usr/bin/kube-rbac-proxy \ Feb 25 11:19:28 crc kubenswrapper[5005]: --logtostderr \ Feb 25 11:19:28 crc kubenswrapper[5005]: --secure-listen-address=:9108 \ Feb 25 11:19:28 crc kubenswrapper[5005]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 25 11:19:28 crc kubenswrapper[5005]: --upstream=http://127.0.0.1:29108/ \ Feb 25 11:19:28 crc kubenswrapper[5005]: --tls-private-key-file=${TLS_PK} \ Feb 25 11:19:28 crc kubenswrapper[5005]: --tls-cert-file=${TLS_CERT} Feb 25 11:19:28 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n28xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-9wjgc_openshift-ovn-kubernetes(12691472-eb44-46a1-bd71-cf3250a90e2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.096039 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xx5w9" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.096655 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ -f "/env/_master" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: source "/env/_master" Feb 25 11:19:28 crc kubenswrapper[5005]: set +o allexport Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v4_join_subnet_opt= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v6_join_subnet_opt= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v4_transit_switch_subnet_opt= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v6_transit_switch_subnet_opt= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: 
Feb 25 11:19:28 crc kubenswrapper[5005]: dns_name_resolver_enabled_flag= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "false" == "true" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: persistent_ips_enabled_flag= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "true" == "true" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: # This is needed so that converting clusters from GA to TP Feb 25 11:19:28 crc kubenswrapper[5005]: # will rollout control plane pods as well Feb 25 11:19:28 crc kubenswrapper[5005]: network_segmentation_enabled_flag= Feb 25 11:19:28 crc kubenswrapper[5005]: multi_network_enabled_flag= Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "true" == "true" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: multi_network_enabled_flag="--enable-multi-network" Feb 25 11:19:28 crc kubenswrapper[5005]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 25 11:19:28 crc kubenswrapper[5005]: exec /usr/bin/ovnkube \ Feb 25 11:19:28 crc kubenswrapper[5005]: --enable-interconnect \ Feb 25 11:19:28 crc kubenswrapper[5005]: --init-cluster-manager "${K8S_NODE}" \ Feb 25 11:19:28 crc kubenswrapper[5005]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 25 11:19:28 crc kubenswrapper[5005]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 25 11:19:28 crc kubenswrapper[5005]: --metrics-bind-address "127.0.0.1:29108" \ Feb 25 11:19:28 crc kubenswrapper[5005]: 
--metrics-enable-pprof \ Feb 25 11:19:28 crc kubenswrapper[5005]: --metrics-enable-config-duration \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${ovn_v4_join_subnet_opt} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${ovn_v6_join_subnet_opt} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${dns_name_resolver_enabled_flag} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${persistent_ips_enabled_flag} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${multi_network_enabled_flag} \ Feb 25 11:19:28 crc kubenswrapper[5005]: ${network_segmentation_enabled_flag} Feb 25 11:19:28 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n28xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-9wjgc_openshift-ovn-kubernetes(12691472-eb44-46a1-bd71-cf3250a90e2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.096724 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.098506 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56aef23_d794_49a4_8e6b_2c9e2d1adebf.slice/crio-90a633b11d4653bc114cf93249670efdebe2f74361760c6451aa32d1837da2c7 WatchSource:0}: Error finding container 90a633b11d4653bc114cf93249670efdebe2f74361760c6451aa32d1837da2c7: Status 404 returned error can't find the container with id 90a633b11d4653bc114cf93249670efdebe2f74361760c6451aa32d1837da2c7 Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.098557 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" podUID="12691472-eb44-46a1-bd71-cf3250a90e2b" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.100261 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-splp7" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.105213 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2wmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.106344 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.108062 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2wmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.109144 5005 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.111251 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6300726a_8703_4d2a_9688_264da029b561.slice/crio-84b19be8fdea2b9749a8df15df8c45ba8119572c01840c7bda4e9174dace7cf1 WatchSource:0}: Error finding container 84b19be8fdea2b9749a8df15df8c45ba8119572c01840c7bda4e9174dace7cf1: Status 404 returned error can't find the container with id 84b19be8fdea2b9749a8df15df8c45ba8119572c01840c7bda4e9174dace7cf1 Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.113606 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 25 11:19:28 crc kubenswrapper[5005]: while [ true ]; Feb 25 11:19:28 crc kubenswrapper[5005]: do Feb 25 11:19:28 crc kubenswrapper[5005]: for f in $(ls /tmp/serviceca); do Feb 25 11:19:28 crc kubenswrapper[5005]: echo $f Feb 25 11:19:28 crc kubenswrapper[5005]: ca_file_path="/tmp/serviceca/${f}" Feb 25 11:19:28 crc kubenswrapper[5005]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 25 11:19:28 crc kubenswrapper[5005]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 25 11:19:28 crc kubenswrapper[5005]: if [ -e "${reg_dir_path}" ]; then Feb 
25 11:19:28 crc kubenswrapper[5005]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 25 11:19:28 crc kubenswrapper[5005]: else Feb 25 11:19:28 crc kubenswrapper[5005]: mkdir $reg_dir_path Feb 25 11:19:28 crc kubenswrapper[5005]: cp $ca_file_path $reg_dir_path/ca.crt Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: for d in $(ls /etc/docker/certs.d); do Feb 25 11:19:28 crc kubenswrapper[5005]: echo $d Feb 25 11:19:28 crc kubenswrapper[5005]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 25 11:19:28 crc kubenswrapper[5005]: reg_conf_path="/tmp/serviceca/${dp}" Feb 25 11:19:28 crc kubenswrapper[5005]: if [ ! -e "${reg_conf_path}" ]; then Feb 25 11:19:28 crc kubenswrapper[5005]: rm -rf /etc/docker/certs.d/$d Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: sleep 60 & wait ${!} Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7qdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-xx5w9_openshift-image-registry(6300726a-8703-4d2a-9688-264da029b561): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: W0225 11:19:28.114106 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e33bce4_e290_4389_b690_398e3566f35d.slice/crio-ed5d804488a5b856d2618d4427a45da2b8739844013c921f871f81fb1f9bdd53 WatchSource:0}: Error finding container ed5d804488a5b856d2618d4427a45da2b8739844013c921f871f81fb1f9bdd53: Status 404 returned error can't find the container with id ed5d804488a5b856d2618d4427a45da2b8739844013c921f871f81fb1f9bdd53 Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.115195 5005 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-xx5w9" podUID="6300726a-8703-4d2a-9688-264da029b561" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.116449 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.118216 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:28 crc kubenswrapper[5005]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 25 11:19:28 crc kubenswrapper[5005]: set -uo pipefail Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: 
OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 25 11:19:28 crc kubenswrapper[5005]: HOSTS_FILE="/etc/hosts" Feb 25 11:19:28 crc kubenswrapper[5005]: TEMP_FILE="/etc/hosts.tmp" Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: # Make a temporary file with the old hosts file's attributes. Feb 25 11:19:28 crc kubenswrapper[5005]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 25 11:19:28 crc kubenswrapper[5005]: echo "Failed to preserve hosts file. Exiting." Feb 25 11:19:28 crc kubenswrapper[5005]: exit 1 Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: while true; do Feb 25 11:19:28 crc kubenswrapper[5005]: declare -A svc_ips Feb 25 11:19:28 crc kubenswrapper[5005]: for svc in "${services[@]}"; do Feb 25 11:19:28 crc kubenswrapper[5005]: # Fetch service IP from cluster dns if present. We make several tries Feb 25 11:19:28 crc kubenswrapper[5005]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 25 11:19:28 crc kubenswrapper[5005]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 25 11:19:28 crc kubenswrapper[5005]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 25 11:19:28 crc kubenswrapper[5005]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 25 11:19:28 crc kubenswrapper[5005]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 25 11:19:28 crc kubenswrapper[5005]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 25 11:19:28 crc kubenswrapper[5005]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 25 11:19:28 crc kubenswrapper[5005]: for i in ${!cmds[*]} Feb 25 11:19:28 crc kubenswrapper[5005]: do Feb 25 11:19:28 crc kubenswrapper[5005]: ips=($(eval "${cmds[i]}")) Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: svc_ips["${svc}"]="${ips[@]}" Feb 25 11:19:28 crc kubenswrapper[5005]: break Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: # Update /etc/hosts only if we get valid service IPs Feb 25 11:19:28 crc kubenswrapper[5005]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 25 11:19:28 crc kubenswrapper[5005]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 25 11:19:28 crc kubenswrapper[5005]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 25 11:19:28 crc kubenswrapper[5005]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 25 11:19:28 crc kubenswrapper[5005]: sleep 60 & wait Feb 25 11:19:28 crc kubenswrapper[5005]: continue Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: # Append resolver entries for services Feb 25 11:19:28 crc kubenswrapper[5005]: rc=0 Feb 25 11:19:28 crc kubenswrapper[5005]: for svc in "${!svc_ips[@]}"; do Feb 25 11:19:28 crc kubenswrapper[5005]: for ip in ${svc_ips[${svc}]}; do Feb 25 11:19:28 crc kubenswrapper[5005]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: if [[ $rc -ne 0 ]]; then Feb 25 11:19:28 crc kubenswrapper[5005]: sleep 60 & wait Feb 25 11:19:28 crc kubenswrapper[5005]: continue Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: Feb 25 11:19:28 crc kubenswrapper[5005]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 25 11:19:28 crc kubenswrapper[5005]: # Replace /etc/hosts with our modified version if needed Feb 25 11:19:28 crc kubenswrapper[5005]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 25 11:19:28 crc kubenswrapper[5005]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 25 11:19:28 crc kubenswrapper[5005]: fi Feb 25 11:19:28 crc kubenswrapper[5005]: sleep 60 & wait Feb 25 11:19:28 crc kubenswrapper[5005]: unset svc_ips Feb 25 11:19:28 crc kubenswrapper[5005]: done Feb 25 11:19:28 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdmtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-splp7_openshift-dns(2e33bce4-e290-4389-b690-398e3566f35d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:28 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.119476 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-splp7" 
podUID="2e33bce4-e290-4389-b690-398e3566f35d" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.126941 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.141352 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.151178 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.161663 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.175613 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.181119 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.181158 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.181169 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.181190 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.181202 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.187415 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.201654 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.284265 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.284305 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.284314 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.284347 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.284357 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.362831 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.363060 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363080 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:29.363047232 +0000 UTC m=+83.403779569 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.363133 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.363231 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.363314 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363319 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363476 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363404 5005 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363508 5005 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363548 5005 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363579 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:29.363557081 +0000 UTC m=+83.404289408 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363412 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363716 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:29.363668975 +0000 UTC m=+83.404401342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363737 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363756 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:29.363743558 +0000 UTC m=+83.404475925 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363759 5005 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.363842 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:29.363822321 +0000 UTC m=+83.404554648 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.387561 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.387621 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.387640 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.387664 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.387682 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.463930 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.464056 5005 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: E0225 11:19:28.464112 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs podName:67964f07-93aa-42ec-90a7-730363ab668b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:29.464095645 +0000 UTC m=+83.504827972 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs") pod "network-metrics-daemon-x2fvb" (UID: "67964f07-93aa-42ec-90a7-730363ab668b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.490433 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.490495 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.490518 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.490549 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.490570 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.592864 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.592901 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.592912 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.592925 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.592935 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.690116 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.690868 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.691922 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.692530 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.693494 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.693957 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.694520 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.695425 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696013 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696442 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696538 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696558 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696582 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696600 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.696886 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.697348 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.698386 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.698897 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.699412 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.700229 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.700747 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.701651 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.702063 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.702616 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.703591 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.704009 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.704969 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.705396 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.706324 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.706768 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.707329 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.708336 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.708907 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.709780 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.710205 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.711006 5005 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.711099 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.712650 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.713510 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.713893 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.715312 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.715940 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.716834 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.717468 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.718440 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.718916 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.719872 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.720558 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.721490 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.722129 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.723035 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.723580 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.724730 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.725170 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.726478 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.727558 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.729807 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.731052 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.732237 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.803265 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.803313 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.803323 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.803338 5005 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.803350 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.905570 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.905598 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.905605 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.905619 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:28 crc kubenswrapper[5005]: I0225 11:19:28.905627 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:28Z","lastTransitionTime":"2026-02-25T11:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.007882 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.007909 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.007919 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.007932 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.007941 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.042152 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"90a633b11d4653bc114cf93249670efdebe2f74361760c6451aa32d1837da2c7"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.043593 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2wmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.044198 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xx5w9" event={"ID":"6300726a-8703-4d2a-9688-264da029b561","Type":"ContainerStarted","Data":"84b19be8fdea2b9749a8df15df8c45ba8119572c01840c7bda4e9174dace7cf1"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.045778 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" event={"ID":"12691472-eb44-46a1-bd71-cf3250a90e2b","Type":"ContainerStarted","Data":"b3be27724691ec3669f977960aeeb4fecff10a672e37ce281230b5fea04777f8"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.045920 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2wmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.046114 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 25 11:19:29 crc kubenswrapper[5005]: while [ true ]; Feb 25 11:19:29 crc kubenswrapper[5005]: do Feb 25 11:19:29 crc kubenswrapper[5005]: for f in $(ls /tmp/serviceca); do Feb 25 11:19:29 crc kubenswrapper[5005]: echo $f Feb 25 11:19:29 crc kubenswrapper[5005]: ca_file_path="/tmp/serviceca/${f}" Feb 25 11:19:29 crc kubenswrapper[5005]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 25 11:19:29 crc kubenswrapper[5005]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 25 11:19:29 crc kubenswrapper[5005]: if [ -e "${reg_dir_path}" ]; then Feb 25 11:19:29 crc kubenswrapper[5005]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 25 11:19:29 crc kubenswrapper[5005]: else Feb 25 11:19:29 crc kubenswrapper[5005]: mkdir $reg_dir_path Feb 25 11:19:29 crc kubenswrapper[5005]: cp $ca_file_path $reg_dir_path/ca.crt Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: for d in $(ls /etc/docker/certs.d); do Feb 25 11:19:29 crc kubenswrapper[5005]: echo $d Feb 25 11:19:29 crc kubenswrapper[5005]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 25 11:19:29 crc kubenswrapper[5005]: reg_conf_path="/tmp/serviceca/${dp}" Feb 25 11:19:29 crc kubenswrapper[5005]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 25 11:19:29 crc kubenswrapper[5005]: rm -rf /etc/docker/certs.d/$d Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: sleep 60 & wait ${!} Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7qdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-xx5w9_openshift-image-registry(6300726a-8703-4d2a-9688-264da029b561): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.046723 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsd74" event={"ID":"03175783-f1a5-4ac6-b942-91a23ab4439d","Type":"ContainerStarted","Data":"57131b42ebbbd25574a06b93c2c630505f7005ad74931b288a7eac9a4cbb7747"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.046765 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 25 11:19:29 crc kubenswrapper[5005]: set -euo pipefail Feb 25 11:19:29 crc kubenswrapper[5005]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 25 11:19:29 crc kubenswrapper[5005]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 25 11:19:29 crc kubenswrapper[5005]: # As the secret mount is optional we must wait for the files to be present. Feb 25 11:19:29 crc kubenswrapper[5005]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 25 11:19:29 crc kubenswrapper[5005]: TS=$(date +%s) Feb 25 11:19:29 crc kubenswrapper[5005]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 25 11:19:29 crc kubenswrapper[5005]: HAS_LOGGED_INFO=0 Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: log_missing_certs(){ Feb 25 11:19:29 crc kubenswrapper[5005]: CUR_TS=$(date +%s) Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 25 11:19:29 crc kubenswrapper[5005]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 25 11:19:29 crc kubenswrapper[5005]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 25 11:19:29 crc kubenswrapper[5005]: HAS_LOGGED_INFO=1 Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: } Feb 25 11:19:29 crc kubenswrapper[5005]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 25 11:19:29 crc kubenswrapper[5005]: log_missing_certs Feb 25 11:19:29 crc kubenswrapper[5005]: sleep 5 Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 25 11:19:29 crc kubenswrapper[5005]: exec /usr/bin/kube-rbac-proxy \ Feb 25 11:19:29 crc kubenswrapper[5005]: --logtostderr \ Feb 25 11:19:29 crc kubenswrapper[5005]: --secure-listen-address=:9108 \ Feb 25 11:19:29 crc kubenswrapper[5005]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 25 11:19:29 crc kubenswrapper[5005]: --upstream=http://127.0.0.1:29108/ \ Feb 25 11:19:29 crc kubenswrapper[5005]: --tls-private-key-file=${TLS_PK} \ Feb 25 11:19:29 crc kubenswrapper[5005]: --tls-cert-file=${TLS_CERT} Feb 25 11:19:29 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n28xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-9wjgc_openshift-ovn-kubernetes(12691472-eb44-46a1-bd71-cf3250a90e2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.047418 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-xx5w9" podUID="6300726a-8703-4d2a-9688-264da029b561" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.047871 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.047926 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 25 11:19:29 crc kubenswrapper[5005]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 25 11:19:29 crc kubenswrapper[5005]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8fbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-dsd74_openshift-multus(03175783-f1a5-4ac6-b942-91a23ab4439d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.048567 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerStarted","Data":"054e357c64bc4b18dcf88957e64e92e81d88b9abae7bd526b3a1dafd89005788"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.048615 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ -f "/env/_master" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:29 crc kubenswrapper[5005]: source "/env/_master" Feb 25 11:19:29 crc kubenswrapper[5005]: set +o allexport Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v4_join_subnet_opt= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v6_join_subnet_opt= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v4_transit_switch_subnet_opt= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v6_transit_switch_subnet_opt= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "" != "" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: dns_name_resolver_enabled_flag= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "false" == "true" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: 
dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: persistent_ips_enabled_flag= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "true" == "true" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: # This is needed so that converting clusters from GA to TP Feb 25 11:19:29 crc kubenswrapper[5005]: # will rollout control plane pods as well Feb 25 11:19:29 crc kubenswrapper[5005]: network_segmentation_enabled_flag= Feb 25 11:19:29 crc kubenswrapper[5005]: multi_network_enabled_flag= Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "true" == "true" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: multi_network_enabled_flag="--enable-multi-network" Feb 25 11:19:29 crc kubenswrapper[5005]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 25 11:19:29 crc kubenswrapper[5005]: exec /usr/bin/ovnkube \ Feb 25 11:19:29 crc kubenswrapper[5005]: --enable-interconnect \ Feb 25 11:19:29 crc kubenswrapper[5005]: --init-cluster-manager "${K8S_NODE}" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 25 11:19:29 crc kubenswrapper[5005]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --metrics-bind-address "127.0.0.1:29108" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --metrics-enable-pprof \ Feb 25 11:19:29 crc kubenswrapper[5005]: --metrics-enable-config-duration \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${ovn_v4_join_subnet_opt} \ Feb 25 11:19:29 crc 
kubenswrapper[5005]: ${ovn_v6_join_subnet_opt} \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${dns_name_resolver_enabled_flag} \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${persistent_ips_enabled_flag} \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${multi_network_enabled_flag} \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${network_segmentation_enabled_flag} Feb 25 11:19:29 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n28xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-9wjgc_openshift-ovn-kubernetes(12691472-eb44-46a1-bd71-cf3250a90e2b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.049033 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-dsd74" podUID="03175783-f1a5-4ac6-b942-91a23ab4439d" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.049629 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"33744808ce141729c89151ae1fcb954b67933173b91614dd941dd8cc793b99ca"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.049727 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" podUID="12691472-eb44-46a1-bd71-cf3250a90e2b" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.053542 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.053751 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-splp7" event={"ID":"2e33bce4-e290-4389-b690-398e3566f35d","Type":"ContainerStarted","Data":"ed5d804488a5b856d2618d4427a45da2b8739844013c921f871f81fb1f9bdd53"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.053838 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5665,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-7l6vx_openshift-multus(ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.054749 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.054703 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 25 11:19:29 crc kubenswrapper[5005]: set -uo pipefail Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 25 11:19:29 crc kubenswrapper[5005]: HOSTS_FILE="/etc/hosts" Feb 25 11:19:29 crc kubenswrapper[5005]: TEMP_FILE="/etc/hosts.tmp" Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: # Make a temporary file with the old hosts file's attributes. Feb 25 11:19:29 crc kubenswrapper[5005]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 25 11:19:29 crc kubenswrapper[5005]: echo "Failed to preserve hosts file. Exiting." Feb 25 11:19:29 crc kubenswrapper[5005]: exit 1 Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: while true; do Feb 25 11:19:29 crc kubenswrapper[5005]: declare -A svc_ips Feb 25 11:19:29 crc kubenswrapper[5005]: for svc in "${services[@]}"; do Feb 25 11:19:29 crc kubenswrapper[5005]: # Fetch service IP from cluster dns if present. 
We make several tries Feb 25 11:19:29 crc kubenswrapper[5005]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 25 11:19:29 crc kubenswrapper[5005]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 25 11:19:29 crc kubenswrapper[5005]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 25 11:19:29 crc kubenswrapper[5005]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 25 11:19:29 crc kubenswrapper[5005]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 25 11:19:29 crc kubenswrapper[5005]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 25 11:19:29 crc kubenswrapper[5005]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 25 11:19:29 crc kubenswrapper[5005]: for i in ${!cmds[*]} Feb 25 11:19:29 crc kubenswrapper[5005]: do Feb 25 11:19:29 crc kubenswrapper[5005]: ips=($(eval "${cmds[i]}")) Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: svc_ips["${svc}"]="${ips[@]}" Feb 25 11:19:29 crc kubenswrapper[5005]: break Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: # Update /etc/hosts only if we get valid service IPs Feb 25 11:19:29 crc kubenswrapper[5005]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 25 11:19:29 crc kubenswrapper[5005]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 25 11:19:29 crc kubenswrapper[5005]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 25 11:19:29 crc kubenswrapper[5005]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 25 11:19:29 crc kubenswrapper[5005]: sleep 60 & wait Feb 25 11:19:29 crc kubenswrapper[5005]: continue Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: # Append resolver entries for services Feb 25 11:19:29 crc kubenswrapper[5005]: rc=0 Feb 25 11:19:29 crc kubenswrapper[5005]: for svc in "${!svc_ips[@]}"; do Feb 25 11:19:29 crc kubenswrapper[5005]: for ip in ${svc_ips[${svc}]}; do Feb 25 11:19:29 crc kubenswrapper[5005]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ $rc -ne 0 ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: sleep 60 & wait Feb 25 11:19:29 crc kubenswrapper[5005]: continue Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 25 11:19:29 crc kubenswrapper[5005]: # Replace /etc/hosts with our modified version if needed Feb 25 11:19:29 crc kubenswrapper[5005]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 25 11:19:29 crc kubenswrapper[5005]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: sleep 60 & wait Feb 25 11:19:29 crc kubenswrapper[5005]: unset svc_ips Feb 25 11:19:29 crc kubenswrapper[5005]: done Feb 25 11:19:29 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdmtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-splp7_openshift-dns(2e33bce4-e290-4389-b690-398e3566f35d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.055246 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.055435 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"4a02fd2dd8ef9c173f9e5cd97f53894b0bbf7a0424f96b8c29d8e326350bbf5b"} Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.055484 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" podUID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.055908 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-splp7" podUID="2e33bce4-e290-4389-b690-398e3566f35d" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.056137 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ -f "/env/_master" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: set -o allexport Feb 25 11:19:29 crc kubenswrapper[5005]: source "/env/_master" Feb 25 11:19:29 crc kubenswrapper[5005]: set +o allexport Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: # OVN-K will try to remove hybrid 
overlay node annotations even when the hybrid overlay is not enabled. Feb 25 11:19:29 crc kubenswrapper[5005]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 25 11:19:29 crc kubenswrapper[5005]: ho_enable="--enable-hybrid-overlay" Feb 25 11:19:29 crc kubenswrapper[5005]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 25 11:19:29 crc kubenswrapper[5005]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 25 11:19:29 crc kubenswrapper[5005]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 25 11:19:29 crc kubenswrapper[5005]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 25 11:19:29 crc kubenswrapper[5005]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --webhook-host=127.0.0.1 \ Feb 25 11:19:29 crc kubenswrapper[5005]: --webhook-port=9743 \ Feb 25 11:19:29 crc kubenswrapper[5005]: ${ho_enable} \ Feb 25 11:19:29 crc kubenswrapper[5005]: --enable-interconnect \ Feb 25 11:19:29 crc kubenswrapper[5005]: --disable-approver \ Feb 25 11:19:29 crc kubenswrapper[5005]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --wait-for-kubernetes-api=200s \ Feb 25 11:19:29 crc kubenswrapper[5005]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --loglevel="${LOGLEVEL}" Feb 25 11:19:29 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc 
kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.056788 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 25 11:19:29 crc kubenswrapper[5005]: apiVersion: v1 Feb 25 11:19:29 crc kubenswrapper[5005]: clusters: Feb 25 11:19:29 crc kubenswrapper[5005]: - cluster: Feb 25 11:19:29 crc kubenswrapper[5005]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 25 11:19:29 crc kubenswrapper[5005]: server: https://api-int.crc.testing:6443 Feb 25 11:19:29 crc kubenswrapper[5005]: name: default-cluster Feb 25 11:19:29 crc kubenswrapper[5005]: contexts: Feb 25 11:19:29 crc kubenswrapper[5005]: - context: Feb 25 11:19:29 crc kubenswrapper[5005]: cluster: default-cluster Feb 25 11:19:29 crc kubenswrapper[5005]: namespace: default Feb 25 11:19:29 crc kubenswrapper[5005]: user: default-auth Feb 25 11:19:29 crc kubenswrapper[5005]: name: default-context Feb 25 11:19:29 crc kubenswrapper[5005]: current-context: default-context Feb 25 11:19:29 crc kubenswrapper[5005]: kind: Config Feb 25 11:19:29 crc kubenswrapper[5005]: preferences: {} Feb 25 11:19:29 crc kubenswrapper[5005]: users: Feb 25 11:19:29 crc kubenswrapper[5005]: - name: default-auth Feb 25 11:19:29 crc kubenswrapper[5005]: user: Feb 25 11:19:29 crc kubenswrapper[5005]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 25 11:19:29 crc kubenswrapper[5005]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 25 11:19:29 crc kubenswrapper[5005]: EOF Feb 25 11:19:29 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z24kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bfx5c_openshift-ovn-kubernetes(c496d07b-7684-4d5f-b36e-be187e76a3de): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.058011 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.058104 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:19:29 crc kubenswrapper[5005]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 25 11:19:29 crc kubenswrapper[5005]: if [[ -f "/env/_master" ]]; then Feb 25 11:19:29 crc kubenswrapper[5005]: set -o 
allexport Feb 25 11:19:29 crc kubenswrapper[5005]: source "/env/_master" Feb 25 11:19:29 crc kubenswrapper[5005]: set +o allexport Feb 25 11:19:29 crc kubenswrapper[5005]: fi Feb 25 11:19:29 crc kubenswrapper[5005]: Feb 25 11:19:29 crc kubenswrapper[5005]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 25 11:19:29 crc kubenswrapper[5005]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 25 11:19:29 crc kubenswrapper[5005]: --disable-webhook \ Feb 25 11:19:29 crc kubenswrapper[5005]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 25 11:19:29 crc kubenswrapper[5005]: --loglevel="${LOGLEVEL}" Feb 25 11:19:29 crc kubenswrapper[5005]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 25 11:19:29 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.059626 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.064693 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.076798 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.086798 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.094794 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.103484 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.109877 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.109992 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc 
kubenswrapper[5005]: I0225 11:19:29.110089 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.110181 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.110264 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.112757 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.120520 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.127188 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.134070 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.142706 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.151982 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.161884 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.178091 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.187149 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.197354 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.212206 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.212410 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.212487 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.212592 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.212675 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.218688 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.228124 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.239158 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.248584 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.286220 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.298942 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.310821 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.314955 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.314986 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.314995 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.315008 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.315016 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.319672 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.331651 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.341963 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.349732 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.359535 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.372921 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.373028 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373059 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:31.373042581 +0000 UTC m=+85.413774908 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.373082 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.373106 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.373132 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373139 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373156 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373168 5005 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373192 5005 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373206 5005 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:31.373193977 +0000 UTC m=+85.413926314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373219 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:31.373213478 +0000 UTC m=+85.413945795 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373269 5005 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373311 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373333 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373351 5005 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373367 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:31.373346302 +0000 UTC m=+85.414078629 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.373410 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:31.373399274 +0000 UTC m=+85.414131681 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.417290 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.417333 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.417342 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.417355 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.417380 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.474225 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.474358 5005 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.474431 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs podName:67964f07-93aa-42ec-90a7-730363ab668b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:31.474414766 +0000 UTC m=+85.515147083 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs") pod "network-metrics-daemon-x2fvb" (UID: "67964f07-93aa-42ec-90a7-730363ab668b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.519530 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.519571 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.519582 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.519597 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.519609 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.621865 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.621902 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.621915 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.621931 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.621944 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.685528 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.685528 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.685548 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.685713 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.685807 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.685995 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.686111 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:29 crc kubenswrapper[5005]: E0225 11:19:29.686196 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.695449 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.724317 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.724350 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.724359 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.724396 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.724409 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.827456 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.827504 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.827516 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.827532 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.827544 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.929854 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.929905 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.929923 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.929947 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:29 crc kubenswrapper[5005]: I0225 11:19:29.929964 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:29Z","lastTransitionTime":"2026-02-25T11:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.032808 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.032866 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.032882 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.032905 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.032922 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.135657 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.135713 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.135728 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.135747 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.135760 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.238341 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.238421 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.238435 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.238455 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.238468 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.340846 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.340915 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.340931 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.341286 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.341332 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.444592 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.444651 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.444670 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.444695 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.444713 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.548677 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.548776 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.548796 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.548829 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.548849 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.650835 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.650905 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.650921 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.650947 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.650964 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.754593 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.754659 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.754682 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.754706 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.754726 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.857357 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.857414 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.857425 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.857441 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.857452 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.959871 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.959910 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.959921 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.959938 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:30 crc kubenswrapper[5005]: I0225 11:19:30.959950 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:30Z","lastTransitionTime":"2026-02-25T11:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.062500 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.062561 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.062583 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.062611 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.062634 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.165641 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.165726 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.165737 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.165767 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.165776 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.268812 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.268922 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.268985 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.269025 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.269084 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.371975 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.372015 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.372023 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.372039 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.372048 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.394060 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394278 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-25 11:19:35.394239663 +0000 UTC m=+89.434972020 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.394421 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.394498 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.394550 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.394623 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394639 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394675 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394686 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394758 5005 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394797 5005 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394821 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394847 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-02-25 11:19:35.394823484 +0000 UTC m=+89.435555851 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394694 5005 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394875 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:35.394863126 +0000 UTC m=+89.435595493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394939 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:35.394917338 +0000 UTC m=+89.435649695 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.394852 5005 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.395009 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:35.394992591 +0000 UTC m=+89.435724958 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.475772 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.475840 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.475866 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.475893 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.475914 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.496461 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.496616 5005 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.496779 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs podName:67964f07-93aa-42ec-90a7-730363ab668b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:35.496756399 +0000 UTC m=+89.537488746 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs") pod "network-metrics-daemon-x2fvb" (UID: "67964f07-93aa-42ec-90a7-730363ab668b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.579290 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.579350 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.579367 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.579426 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.579450 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.682439 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.682498 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.682514 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.682538 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.682556 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.684696 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.684745 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.684836 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.684696 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.684910 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.685007 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.685110 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:31 crc kubenswrapper[5005]: E0225 11:19:31.685191 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.785161 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.785235 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.785259 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.785288 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.785312 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.888296 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.888364 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.888419 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.888445 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.888464 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.991971 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.992039 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.992055 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.992079 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:31 crc kubenswrapper[5005]: I0225 11:19:31.992097 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:31Z","lastTransitionTime":"2026-02-25T11:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.094652 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.094761 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.094785 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.094812 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.094830 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.198458 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.198523 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.198547 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.198622 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.198645 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.300950 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.301003 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.301019 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.301044 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.301064 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.403846 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.403892 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.403908 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.403929 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.403945 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.507067 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.507111 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.507122 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.507140 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.507152 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.609844 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.609896 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.609914 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.609937 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.609954 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.712996 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.713057 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.713081 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.713111 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.713134 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.816859 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.816904 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.816956 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.816972 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.816984 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.920296 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.920362 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.920584 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.920603 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:32 crc kubenswrapper[5005]: I0225 11:19:32.920613 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:32Z","lastTransitionTime":"2026-02-25T11:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.022853 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.022900 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.022916 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.022935 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.022949 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.080882 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.094943 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.095284 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:19:33 crc kubenswrapper[5005]: E0225 11:19:33.095478 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.124887 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.124949 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.124972 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.124999 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.125022 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.228241 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.228322 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.228351 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.228412 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.228463 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.331622 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.331692 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.331711 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.331738 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.331758 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.435547 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.435609 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.435621 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.435640 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.435652 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.538413 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.538488 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.538522 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.538552 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.538570 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.641798 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.641850 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.641859 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.641880 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.641892 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.685290 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.685309 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.685454 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:33 crc kubenswrapper[5005]: E0225 11:19:33.685461 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:33 crc kubenswrapper[5005]: E0225 11:19:33.685756 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:33 crc kubenswrapper[5005]: E0225 11:19:33.685813 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.685802 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:33 crc kubenswrapper[5005]: E0225 11:19:33.685969 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.745670 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.745740 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.745758 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.745785 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.745802 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.849140 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.849183 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.849195 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.849211 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.849223 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.951751 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.951792 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.951805 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.951822 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:33 crc kubenswrapper[5005]: I0225 11:19:33.951834 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:33Z","lastTransitionTime":"2026-02-25T11:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.054127 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.054170 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.054180 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.054196 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.054210 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.069164 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:19:34 crc kubenswrapper[5005]: E0225 11:19:34.069469 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.156641 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.156704 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.156722 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.156746 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.156765 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.259601 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.259647 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.259663 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.259682 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.259699 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.361985 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.362046 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.362064 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.362086 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.362102 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.464420 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.464489 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.464506 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.464531 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.464554 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.509821 5005 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.567606 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.567666 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.567684 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.567708 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.567728 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.670509 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.670606 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.670626 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.670650 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.670667 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.772833 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.772896 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.772912 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.772934 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.772950 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.875260 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.875440 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.875470 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.875534 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.875554 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.977791 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.977859 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.977876 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.977901 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:34 crc kubenswrapper[5005]: I0225 11:19:34.977921 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:34Z","lastTransitionTime":"2026-02-25T11:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.080135 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.080217 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.080244 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.080274 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.080303 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.185124 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.185204 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.185225 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.185254 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.185285 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.288450 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.288495 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.288507 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.288524 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.288538 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.391315 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.391408 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.391426 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.391452 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.391469 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.437571 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.437771 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.437822 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.437939 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:43.437891551 +0000 UTC m=+97.478623918 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.437965 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438018 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438057 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438073 5005 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.438079 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 
11:19:35.438134 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:43.438115669 +0000 UTC m=+97.478848076 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.438178 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438235 5005 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438314 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:43.438289385 +0000 UTC m=+97.479021742 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438032 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438347 5005 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438346 5005 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438433 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:43.43841994 +0000 UTC m=+97.479152307 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.438468 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:43.438446751 +0000 UTC m=+97.479179108 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.493476 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.493529 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.493543 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.493562 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.493577 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.538881 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.539167 5005 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.539275 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs podName:67964f07-93aa-42ec-90a7-730363ab668b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:43.539247455 +0000 UTC m=+97.579979822 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs") pod "network-metrics-daemon-x2fvb" (UID: "67964f07-93aa-42ec-90a7-730363ab668b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.596654 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.596703 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.596722 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.596744 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.596762 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.684996 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.685628 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.685852 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.685895 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.685994 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.686626 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.686959 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:35 crc kubenswrapper[5005]: E0225 11:19:35.687269 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.699413 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.699467 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.699482 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.699504 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.699518 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.801896 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.801946 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.801956 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.801969 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.801979 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.904953 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.905227 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.905331 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.905466 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:35 crc kubenswrapper[5005]: I0225 11:19:35.905556 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:35Z","lastTransitionTime":"2026-02-25T11:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.007900 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.008178 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.008242 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.008310 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.008395 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.110479 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.110524 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.110536 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.110555 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.110572 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.212562 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.212848 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.212944 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.213022 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.213094 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.315176 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.315209 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.315218 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.315261 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.315270 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.418101 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.418140 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.418148 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.418163 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.418171 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.520497 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.520527 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.520554 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.520569 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.520578 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.623212 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.623250 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.623260 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.623274 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.623282 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.701223 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.722394 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.726141 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.726225 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.726236 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.726251 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.726261 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.734170 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.744310 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.759544 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.769797 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.777357 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.786047 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.799362 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.807493 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.817632 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.825182 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.828074 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc 
kubenswrapper[5005]: I0225 11:19:36.828113 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.828125 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.828143 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.828155 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.834157 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.840240 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.846666 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.859131 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.941827 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.941859 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.941866 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.941880 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:36 crc kubenswrapper[5005]: I0225 11:19:36.941889 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:36Z","lastTransitionTime":"2026-02-25T11:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.044102 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.044152 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.044168 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.044190 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.044208 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.146255 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.146316 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.146339 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.146420 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.146444 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.248286 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.248329 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.248342 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.248359 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.248394 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.351313 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.351445 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.351472 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.351505 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.351526 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.453728 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.453804 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.453823 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.453851 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.453875 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.557022 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.557084 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.557101 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.557127 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.557144 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.660865 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.660925 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.660949 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.660977 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.660998 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.684781 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.684815 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:37 crc kubenswrapper[5005]: E0225 11:19:37.684922 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.684790 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.684997 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:37 crc kubenswrapper[5005]: E0225 11:19:37.685157 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:37 crc kubenswrapper[5005]: E0225 11:19:37.685271 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:37 crc kubenswrapper[5005]: E0225 11:19:37.685447 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.763607 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.763666 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.763678 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.763696 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.763707 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.865937 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.865985 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.866000 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.866017 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.866029 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.969279 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.969330 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.969345 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.969363 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:37 crc kubenswrapper[5005]: I0225 11:19:37.969401 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:37Z","lastTransitionTime":"2026-02-25T11:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.071945 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.072003 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.072021 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.072044 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.072062 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.175303 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.175410 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.175436 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.175464 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.175486 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.231466 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.231516 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.231534 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.231557 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.231575 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: E0225 11:19:38.249598 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.254782 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.254844 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.254866 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.254894 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.254910 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: E0225 11:19:38.270838 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.275612 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.275661 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.275678 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.275700 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.275718 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: E0225 11:19:38.293030 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.297735 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.297790 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.297808 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.297831 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.297848 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: E0225 11:19:38.313498 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.317637 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.317722 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.317747 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.317776 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.317799 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: E0225 11:19:38.333212 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"37b239f9-8862-4454-946c-237d19e88927\\\",\\\"systemUUID\\\":\\\"25838fef-f2f6-482f-b878-b96864dc5280\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:38 crc kubenswrapper[5005]: E0225 11:19:38.333351 5005 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.335118 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.335185 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.335197 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.335219 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.335234 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.437729 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.437788 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.437805 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.437830 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.437847 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.541194 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.541269 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.541305 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.541336 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.541359 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.644803 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.644865 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.644887 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.644918 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.644941 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.747723 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.747788 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.747808 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.747830 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.747849 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.850960 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.851027 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.851049 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.851078 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.851113 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.954695 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.954763 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.954786 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.954816 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:38 crc kubenswrapper[5005]: I0225 11:19:38.954839 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:38Z","lastTransitionTime":"2026-02-25T11:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.057621 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.057703 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.057722 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.058122 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.058193 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.160821 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.160873 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.160891 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.160912 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.160929 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.263582 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.263716 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.263735 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.263762 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.263781 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.366312 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.366428 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.366454 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.366485 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.366502 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.483774 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.483849 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.483872 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.483901 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.483924 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.586859 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.586932 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.586956 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.586987 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.587010 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.684837 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.684932 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.685093 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.685128 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:39 crc kubenswrapper[5005]: E0225 11:19:39.685083 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:39 crc kubenswrapper[5005]: E0225 11:19:39.685952 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:39 crc kubenswrapper[5005]: E0225 11:19:39.686114 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:39 crc kubenswrapper[5005]: E0225 11:19:39.686616 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.690141 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.690204 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.690227 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.690256 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.690274 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.794129 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.794509 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.794522 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.794571 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.794583 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.897942 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.897993 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.898008 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.898027 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:39 crc kubenswrapper[5005]: I0225 11:19:39.898038 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:39Z","lastTransitionTime":"2026-02-25T11:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.000393 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.000443 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.000457 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.000478 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.000493 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.085330 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerStarted","Data":"ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.103219 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.103281 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.103299 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.103325 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.103345 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.106644 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.124076 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.146120 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.160791 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.173898 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.184271 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.199715 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.207081 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.207244 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.207353 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.207549 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.207667 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.215903 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.229663 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.241107 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.254800 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.268632 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.281287 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.298981 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.310089 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.310152 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.310168 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.310189 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.310204 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.317849 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.331833 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.412994 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc 
kubenswrapper[5005]: I0225 11:19:40.413061 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.413085 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.413113 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.413136 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.521448 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.521527 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.521550 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.521580 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.521602 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.624461 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.624528 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.624555 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.624578 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.624595 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.727365 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.727435 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.727451 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.727476 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.727491 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.830728 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.830774 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.830786 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.830805 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.830818 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.933942 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.934003 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.934020 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.934047 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:40 crc kubenswrapper[5005]: I0225 11:19:40.934064 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:40Z","lastTransitionTime":"2026-02-25T11:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.036229 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.036262 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.036271 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.036284 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.036292 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.091758 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.095225 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" exitCode=0 Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.095304 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.098479 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xx5w9" event={"ID":"6300726a-8703-4d2a-9688-264da029b561","Type":"ContainerStarted","Data":"f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.102854 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" event={"ID":"12691472-eb44-46a1-bd71-cf3250a90e2b","Type":"ContainerStarted","Data":"78febe04e22d16729c7070464df8e79c328a1afd4a4d6a52772afd0953e8607c"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.102912 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" event={"ID":"12691472-eb44-46a1-bd71-cf3250a90e2b","Type":"ContainerStarted","Data":"5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 
11:19:41.104291 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.104739 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" containerID="ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177" exitCode=0 Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.104781 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerDied","Data":"ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.114334 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.129467 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.137556 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.139607 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.139642 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.139651 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.139666 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.139675 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.144765 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.157084 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.165502 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.173183 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.181505 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.190536 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 
11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.200336 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.213958 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.225605 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.233504 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.242020 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.242146 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.242163 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.242183 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.242195 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.243640 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.254484 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.270531 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f12
15ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.293921 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.306962 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.315710 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.327080 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.341237 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.345005 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.345038 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.345046 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.345061 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.345070 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.356095 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.364423 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.373960 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.388844 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.402587 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.424051 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.440563 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.447591 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.447841 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.447862 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.447887 5005 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.447904 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.453897 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.467341 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.486398 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.551125 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.551221 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.551242 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.551274 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.551294 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.616922 5005 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.654141 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.654219 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.654238 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.654268 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.654287 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.684648 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.684688 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.684778 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:41 crc kubenswrapper[5005]: E0225 11:19:41.684927 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.684952 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:41 crc kubenswrapper[5005]: E0225 11:19:41.685093 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:41 crc kubenswrapper[5005]: E0225 11:19:41.685208 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:41 crc kubenswrapper[5005]: E0225 11:19:41.685297 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.758701 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.758759 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.758773 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.758793 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.758810 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.861659 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.861718 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.861729 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.861746 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.861760 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.964534 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.964593 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.964610 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.964637 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:41 crc kubenswrapper[5005]: I0225 11:19:41.964654 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:41Z","lastTransitionTime":"2026-02-25T11:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.068074 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.068147 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.068160 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.068182 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.068197 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.112621 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" containerID="00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d" exitCode=0 Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.112722 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerDied","Data":"00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.120790 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.120914 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.120941 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.120964 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.120988 5005 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.121008 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.137960 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.150257 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.165411 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.170836 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.170876 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.170890 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.170909 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.170921 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.177691 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.185745 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.196611 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.211573 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.226129 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.242419 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.252058 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.260327 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1
afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.269451 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.272401 5005 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.272432 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.272443 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.272458 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.272468 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.279115 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.290610 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 
11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.297985 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.304208 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.374598 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.374645 5005 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.374656 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.374672 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.374700 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.476437 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.476472 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.476480 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.476492 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.476500 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.578956 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.579027 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.579047 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.579072 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.579089 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.681717 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.681769 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.681778 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.681793 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.681803 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.785967 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.786032 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.786049 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.786074 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.786093 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.889540 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.889578 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.889587 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.889603 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.889614 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.991648 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.991683 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.991695 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.991711 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:42 crc kubenswrapper[5005]: I0225 11:19:42.991722 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:42Z","lastTransitionTime":"2026-02-25T11:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.094727 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.094770 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.094782 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.094797 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.094809 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.154728 5005 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.155307 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" containerID="829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377" exitCode=0 Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.155420 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerDied","Data":"829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.159538 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-splp7" event={"ID":"2e33bce4-e290-4389-b690-398e3566f35d","Type":"ContainerStarted","Data":"99ddd9170ca9976c1bc2e62106d8ebe115dc8251c8666c5067dba543dcdc1c46"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.175397 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.194735 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1
afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.196738 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.196785 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.196801 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc 
kubenswrapper[5005]: I0225 11:19:43.196822 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.196840 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.207408 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.221553 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.232969 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a
938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.241475 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.250904 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.262114 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.272761 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.283031 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.295284 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.303794 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.306715 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.306750 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.306764 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.306787 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.306800 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.312078 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.322954 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.335112 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.354352 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.368508 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.381208 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421
a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.391461 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.405742 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.409674 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.409898 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.409997 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.410102 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.410311 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.415023 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.423425 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ddd9170ca9
976c1bc2e62106d8ebe115dc8251c8666c5067dba543dcdc1c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.437823 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.450122 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.471142 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.487148 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.497628 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1
afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.508210 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.512664 5005 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.512697 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.512709 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.512726 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.512737 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.519446 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.521148 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.521243 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.521290 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.521319 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521363 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.521322226 +0000 UTC m=+113.562054593 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521448 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521447 5005 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.521470 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521513 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521526 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521538 5005 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521470 5005 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521576 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.521540074 +0000 UTC m=+113.562272441 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521581 5005 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521618 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.521598356 +0000 UTC m=+113.562330783 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521681 5005 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521726 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.521701 +0000 UTC m=+113.562433367 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.521764 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.521747192 +0000 UTC m=+113.562479559 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.533854 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.544201 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.554019 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.615483 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.615674 5005 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.616005 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.616092 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.616178 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.622071 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.622249 5005 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.622333 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs podName:67964f07-93aa-42ec-90a7-730363ab668b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.622310167 +0000 UTC m=+113.663042534 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs") pod "network-metrics-daemon-x2fvb" (UID: "67964f07-93aa-42ec-90a7-730363ab668b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.684577 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.684629 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.684588 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.684922 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.685142 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.685252 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.685347 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:43 crc kubenswrapper[5005]: E0225 11:19:43.686707 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.718925 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.719001 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.719021 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.719046 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.719063 5005 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.822019 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.822094 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.822118 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.822149 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.822174 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.924455 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.924497 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.924509 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.924529 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:43 crc kubenswrapper[5005]: I0225 11:19:43.924540 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:43Z","lastTransitionTime":"2026-02-25T11:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.026422 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.026633 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.026647 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.026661 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.026670 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.129671 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.129721 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.129735 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.129755 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.129773 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.165297 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"9bcec292ba49b321b2664ea69a30a7f17e4a982f6d6d6eac1af0e4b399630cb7"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.165339 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.166796 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsd74" event={"ID":"03175783-f1a5-4ac6-b942-91a23ab4439d","Type":"ContainerStarted","Data":"0e7097a39853110bb9bfed8490c0eac568dea29dba97a8561d60566e3d98fa83"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.170227 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" containerID="517d2faba7d933bb92235d0347be18b3c2c419013080a69452698372351515f8" exitCode=0 Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.170284 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerDied","Data":"517d2faba7d933bb92235d0347be18b3c2c419013080a69452698372351515f8"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.179324 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.180422 5005 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.190525 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1
afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.197964 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.216477 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.226704 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a
938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.232249 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.232268 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.232276 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.232288 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 
11:19:44.232298 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.234598 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.244687 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcec292ba49b321b2664ea69a30a7f17e4a982f6d6d6eac1af0e4b399630cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf45a94e52ca
384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.259073 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.269877 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.278618 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ddd9170ca9976c1bc2e62106d8ebe115dc8251c8666c5067dba543dcdc1c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.289303 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.301409 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.313597 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.327598 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.334696 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.334728 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.334738 5005 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.334751 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.334761 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.337330 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.362691 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.373339 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 
11:19:44.386970 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcec292ba49b321b2664ea69a30a7f17e4a982f6d6d6eac1af0e4b399630cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.401004 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 
11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.412759 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.422311 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.430397 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.436361 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.437056 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.437139 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.437160 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.437212 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.437234 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.442825 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ddd9170ca9976c1bc2e62106d8ebe115dc8251c8666c5067dba543dcdc1c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.451517 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e7097a39853110bb9bfed8490c0eac568dea29dba97a8561d60566e3d98fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.464583 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.473580 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.488298 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.501986 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.511150 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.521735 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://517d2faba7d933bb92235d0347be18b3c2c419013080a69452698372351515f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://517d2faba7d933bb92235d0347be18b3c2c419013080a69452698372351515f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.530733 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.540446 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.540486 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.540497 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.540515 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.540526 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.643253 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.643303 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.643321 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.643339 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.643355 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.747628 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.747953 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.748006 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.748024 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.748036 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.850367 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.850422 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.850436 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.850455 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.850467 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.956993 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.957591 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.957615 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.957635 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:44 crc kubenswrapper[5005]: I0225 11:19:44.957650 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:44Z","lastTransitionTime":"2026-02-25T11:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.077084 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.077129 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.077141 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.077158 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.077170 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.181654 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.182111 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.182127 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.182146 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.182160 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.186178 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" containerID="fd3a4c3ba619b43c79fe77fa81e904c51564ba47d8d0a5454e6eb1a6c70c9ab0" exitCode=0 Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.186239 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerDied","Data":"fd3a4c3ba619b43c79fe77fa81e904c51564ba47d8d0a5454e6eb1a6c70c9ab0"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.222261 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c496d07b-7684-4d5f-b36e-be187e76a3de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z24kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfx5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.238309 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.252931 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.272025 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://ecd54fbedc45f5f52a5afcbc07b6198ed741fb9fc7619d74b56e1ff5a4f6a177\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a4c4381c80e9f7902648c290a58f3398b2d06b1f1add8ab8e89460b530dc5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://829b4067c68452031dfaa345a9f72fa60cbb10a6b1bae9311519988dca212377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://517d2faba7d933bb92235d0347be18b3c2c419013080a69452698372351515f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://517d2faba7d933bb92235d0347be18b3c2c419013080a69452698372351515f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd3a4c3ba619b43c79fe77fa81e904c51564ba47d8d0a5454e6eb1a6c70c9ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd3a4c3ba619b43c79fe77fa81e904c51564ba47d8d0a5454e6eb1a6c70c9ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m5665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l6vx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.285091 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.285133 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.285146 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.285163 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.285175 
5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.286232 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.295938 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12691472-eb44-46a1-bd71-cf3250a90e2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cfb46a3a90ec5ee3daedd0fd483d5bb0d245ef0705a3712084d8584a44a64cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78febe04e22d16729c7070464df8e79c328a1
afd4a4d6a52772afd0953e8607c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n28xf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wjgc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.306046 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67964f07-93aa-42ec-90a7-730363ab668b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-649nb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x2fvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.320639 5005 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c06f67-7a5e-4278-818e-873a7b9618de\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-25T11:19:23Z\\\",\\\"message\\\":\\\"le observer\\\\nW0225 11:19:23.375161 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0225 11:19:23.375417 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0225 11:19:23.376481 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1757710392/tls.crt::/tmp/serving-cert-1757710392/tls.key\\\\\\\"\\\\nI0225 11:19:23.773334 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0225 11:19:23.776532 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0225 11:19:23.776566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0225 11:19:23.776611 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0225 11:19:23.776627 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0225 11:19:23.783108 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0225 11:19:23.783145 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0225 11:19:23.783151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783197 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0225 11:19:23.783210 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0225 11:19:23.783224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0225 11:19:23.783232 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0225 11:19:23.783240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0225 11:19:23.784611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-25T11:19:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a
938102d269a234c9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.329800 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09f6eae2-6ce6-420b-91c8-a2056755f042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:18:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://994b36ddb3c4dd02f4ca9428bbffdc3bd360fb5756a99fa150ca0f09c688781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:18:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac90cf045284dd4eebd73436f815796ed7da40e53d5e650c94df83b44b3c23e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-25T11:18:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-25T11:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:18:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.342519 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d56aef23-d794-49a4-8e6b-2c9e2d1adebf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcec292ba49b321b2664ea69a30a7f17e4a982f6d6d6eac1af0e4b399630cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be
829f1165eaef95efdccc1bf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2wmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tct5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.354701 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.362905 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.370522 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xx5w9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6300726a-8703-4d2a-9688-264da029b561\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fcdc3f750c06a386dfa36f1e62fb61c5ba3a989e452806a1f7d80724e62f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7qdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xx5w9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.379618 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-splp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33bce4-e290-4389-b690-398e3566f35d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ddd9170ca9976c1bc2e62106d8ebe115dc8251c8666c5067dba543dcdc1c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdmtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-splp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.388760 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.389320 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.389341 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.389357 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.389418 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.391075 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsd74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03175783-f1a5-4ac6-b942-91a23ab4439d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e7097a39853110bb9bfed8490c0eac568dea29dba97a8561d60566e3d98fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\
\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8fbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-25T11:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsd74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.401034 5005 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-25T11:19:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef06a60a8f708391519e4556b0e6d646fd08b52a4530abe31ab51ec93d4cf822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-25T11:19:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.492097 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.492142 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.492154 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.492170 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.492182 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.595719 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.596061 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.596074 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.596092 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.596104 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.685470 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.685543 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.685587 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.685638 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:45 crc kubenswrapper[5005]: E0225 11:19:45.685666 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:45 crc kubenswrapper[5005]: E0225 11:19:45.685782 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:45 crc kubenswrapper[5005]: E0225 11:19:45.685914 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:45 crc kubenswrapper[5005]: E0225 11:19:45.686061 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.698343 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.698388 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.698398 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.698411 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.698420 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.801291 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.801337 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.801354 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.801401 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.801420 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.904824 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.904859 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.904868 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.904884 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:45 crc kubenswrapper[5005]: I0225 11:19:45.904893 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:45Z","lastTransitionTime":"2026-02-25T11:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.007910 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.007958 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.007972 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.007991 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.008005 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.111803 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.111864 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.111890 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.111922 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.111947 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.194200 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"51334d8c41b6524ff9ce408bcd95db8f97d9c2a78ce10ccb0f1f358e3d9fc8f9"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.194302 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"deeffbb9d7719b1a82a28f4dfcf1cfa11a4da13c94aef9f883d1685ccdc2c564"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.201833 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a" containerID="974fa88991b6dd728940799b845fe30a139ba69f03cca826fb3efc50c8b1ddd7" exitCode=0 Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.201933 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerDied","Data":"974fa88991b6dd728940799b845fe30a139ba69f03cca826fb3efc50c8b1ddd7"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.216040 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.216121 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.216151 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.216290 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc 
kubenswrapper[5005]: I0225 11:19:46.216318 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.260642 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.260615889 podStartE2EDuration="17.260615889s" podCreationTimestamp="2026-02-25 11:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:46.260284307 +0000 UTC m=+100.301016664" watchObservedRunningTime="2026-02-25 11:19:46.260615889 +0000 UTC m=+100.301348246" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.284437 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podStartSLOduration=40.284353415 podStartE2EDuration="40.284353415s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:46.283761173 +0000 UTC m=+100.324493530" watchObservedRunningTime="2026-02-25 11:19:46.284353415 +0000 UTC m=+100.325085772" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.297483 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xx5w9" podStartSLOduration=40.297460222 podStartE2EDuration="40.297460222s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:46.296463666 +0000 UTC m=+100.337196023" watchObservedRunningTime="2026-02-25 11:19:46.297460222 +0000 UTC m=+100.338192569" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.317202 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-splp7" podStartSLOduration=40.31714689 podStartE2EDuration="40.31714689s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:46.31689211 +0000 UTC m=+100.357624477" watchObservedRunningTime="2026-02-25 11:19:46.31714689 +0000 UTC m=+100.357879227" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.321654 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.321717 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.321729 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.321776 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.321788 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.339296 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dsd74" podStartSLOduration=40.339267246 podStartE2EDuration="40.339267246s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:46.338750667 +0000 UTC m=+100.379483034" watchObservedRunningTime="2026-02-25 11:19:46.339267246 +0000 UTC m=+100.379999613" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.426046 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.426081 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.426092 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.426106 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.426115 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.476869 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wjgc" podStartSLOduration=40.47685032 podStartE2EDuration="40.47685032s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:46.464386796 +0000 UTC m=+100.505119123" watchObservedRunningTime="2026-02-25 11:19:46.47685032 +0000 UTC m=+100.517582637" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.529541 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.529585 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.529600 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.529618 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.529630 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.632215 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.632255 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.632263 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.632277 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.632286 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.735220 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.735632 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.735645 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.735664 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.735677 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.837704 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.837745 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.837755 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.837770 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.837779 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.941717 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.941770 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.941787 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.941811 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:46 crc kubenswrapper[5005]: I0225 11:19:46.941829 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:46Z","lastTransitionTime":"2026-02-25T11:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.046711 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.046760 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.046772 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.046792 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.046806 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.151030 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.151365 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.151520 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.151642 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.152897 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.214771 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" event={"ID":"ab0464f0-94f5-4c58-8b46-0dbfc3c15a4a","Type":"ContainerStarted","Data":"d64432c89f91bf2848b7f6d618d25494372169e41a010e3e3352baca3b2c96ea"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.223070 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerStarted","Data":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.223742 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.223803 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.223830 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.257467 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.257524 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.257544 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.257569 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.257589 5005 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.286632 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.288616 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.303168 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podStartSLOduration=41.303142635 podStartE2EDuration="41.303142635s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:47.302775121 +0000 UTC m=+101.343507488" watchObservedRunningTime="2026-02-25 11:19:47.303142635 +0000 UTC m=+101.343875002" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.303774 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7l6vx" podStartSLOduration=41.303764396 podStartE2EDuration="41.303764396s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:47.256717292 +0000 UTC m=+101.297449699" watchObservedRunningTime="2026-02-25 11:19:47.303764396 +0000 UTC m=+101.344496763" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 
11:19:47.360073 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.360108 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.360119 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.360133 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.360145 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.464011 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.464067 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.464085 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.464109 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.464127 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.566878 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.566925 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.566937 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.566953 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.566964 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.670067 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.670130 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.670148 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.670175 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.670199 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.684909 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.684952 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.685047 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:47 crc kubenswrapper[5005]: E0225 11:19:47.685227 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.685284 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:47 crc kubenswrapper[5005]: E0225 11:19:47.685456 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:47 crc kubenswrapper[5005]: E0225 11:19:47.685606 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:47 crc kubenswrapper[5005]: E0225 11:19:47.685842 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.686640 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:19:47 crc kubenswrapper[5005]: E0225 11:19:47.686912 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.773852 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.773918 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.773944 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.773980 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.774005 5005 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.876505 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.876583 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.876607 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.876639 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.876663 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.978863 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.978909 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.978918 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.978932 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:47 crc kubenswrapper[5005]: I0225 11:19:47.978941 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:47Z","lastTransitionTime":"2026-02-25T11:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.080601 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.080637 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.080647 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.080663 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.080674 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:48Z","lastTransitionTime":"2026-02-25T11:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.182698 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.182738 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.182751 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.182769 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.182781 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:48Z","lastTransitionTime":"2026-02-25T11:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.264316 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-x2fvb"] Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.264720 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:48 crc kubenswrapper[5005]: E0225 11:19:48.264815 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.285574 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.285622 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.285639 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.285661 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.285678 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:48Z","lastTransitionTime":"2026-02-25T11:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.388208 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.388247 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.388259 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.388280 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.388291 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:48Z","lastTransitionTime":"2026-02-25T11:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.461868 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.461906 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.461914 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.461928 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.461938 5005 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-25T11:19:48Z","lastTransitionTime":"2026-02-25T11:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.507144 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5"] Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.507772 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.509138 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.509389 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.509880 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.511068 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.575335 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b801378-aad7-497c-87cf-5d52b2f5a268-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.575581 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b801378-aad7-497c-87cf-5d52b2f5a268-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.575693 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5b801378-aad7-497c-87cf-5d52b2f5a268-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.575800 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5b801378-aad7-497c-87cf-5d52b2f5a268-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.575863 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5b801378-aad7-497c-87cf-5d52b2f5a268-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.675961 5005 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676238 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b801378-aad7-497c-87cf-5d52b2f5a268-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676326 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b801378-aad7-497c-87cf-5d52b2f5a268-kube-api-access\") 
pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676355 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b801378-aad7-497c-87cf-5d52b2f5a268-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676466 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5b801378-aad7-497c-87cf-5d52b2f5a268-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676505 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5b801378-aad7-497c-87cf-5d52b2f5a268-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676582 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5b801378-aad7-497c-87cf-5d52b2f5a268-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.676598 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5b801378-aad7-497c-87cf-5d52b2f5a268-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.677743 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b801378-aad7-497c-87cf-5d52b2f5a268-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.685889 5005 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.691191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b801378-aad7-497c-87cf-5d52b2f5a268-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.706249 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b801378-aad7-497c-87cf-5d52b2f5a268-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9p4v5\" (UID: \"5b801378-aad7-497c-87cf-5d52b2f5a268\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: I0225 11:19:48.824105 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" Feb 25 11:19:48 crc kubenswrapper[5005]: W0225 11:19:48.840946 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b801378_aad7_497c_87cf_5d52b2f5a268.slice/crio-031b2214ed4627f93cf13bfc9f02579d88a862d26092000e186eed2c95501045 WatchSource:0}: Error finding container 031b2214ed4627f93cf13bfc9f02579d88a862d26092000e186eed2c95501045: Status 404 returned error can't find the container with id 031b2214ed4627f93cf13bfc9f02579d88a862d26092000e186eed2c95501045 Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.230259 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4df45627d851ca558f35bdd5b7658daf475532149c72b3aa29e5d7ab1e242c26"} Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.232787 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" event={"ID":"5b801378-aad7-497c-87cf-5d52b2f5a268","Type":"ContainerStarted","Data":"385399a4577a0c52775e18af3350741af58cb04aff80eb25e8493c00eb0e0dc2"} Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.233121 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" event={"ID":"5b801378-aad7-497c-87cf-5d52b2f5a268","Type":"ContainerStarted","Data":"031b2214ed4627f93cf13bfc9f02579d88a862d26092000e186eed2c95501045"} Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.684902 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.684962 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:49 crc kubenswrapper[5005]: E0225 11:19:49.685648 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.685119 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:49 crc kubenswrapper[5005]: I0225 11:19:49.685044 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:49 crc kubenswrapper[5005]: E0225 11:19:49.685840 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x2fvb" podUID="67964f07-93aa-42ec-90a7-730363ab668b" Feb 25 11:19:49 crc kubenswrapper[5005]: E0225 11:19:49.685966 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 25 11:19:49 crc kubenswrapper[5005]: E0225 11:19:49.686107 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.764530 5005 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.764878 5005 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.850697 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9p4v5" podStartSLOduration=44.850661212 podStartE2EDuration="44.850661212s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:49.279553114 +0000 UTC m=+103.320285481" watchObservedRunningTime="2026-02-25 11:19:50.850661212 +0000 UTC m=+104.891393589" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.853567 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xnfp6"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.854301 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mn7l8"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.854495 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.854718 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.861903 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.863097 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-75zdc"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.863573 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.866557 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.866568 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.870429 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.885041 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.885444 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.886537 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.886877 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887068 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887096 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887146 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887308 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887463 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887712 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.887953 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.888023 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 25 11:19:50 crc 
kubenswrapper[5005]: I0225 11:19:50.888361 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.888057 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.888493 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.888250 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.888644 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.890705 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.895953 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.896439 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.897528 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.898291 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.898912 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.899176 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.900992 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gjfbr"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.901347 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.901677 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.909282 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.909507 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.912220 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.912424 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rx5lf"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.913028 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.916978 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.917267 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.917416 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.917584 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.917699 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.917807 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.917922 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.918031 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.918536 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.919907 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.921117 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.921208 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.921145 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.923302 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.924749 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.925343 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-65dkm"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.925777 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.930383 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-277gg"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.930918 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.933787 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.934123 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.934305 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.934515 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.934820 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.934999 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.935118 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.935154 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.935258 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.935399 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.935500 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.935619 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.951687 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.960585 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.960718 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.960732 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.960858 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.960894 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.960956 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.961952 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.962016 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.967355 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.971756 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vmm2d"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.973605 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rsd6r"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.973746 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.974994 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xxm8g"] Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.975542 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.976161 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 25 11:19:50 crc kubenswrapper[5005]: I0225 11:19:50.978381 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.001864 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.002117 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.006418 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.007641 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.007709 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.007840 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.008793 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.008068 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.008445 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.008642 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 25 11:19:51 crc 
kubenswrapper[5005]: I0225 11:19:51.008967 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.008711 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.009117 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.009658 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.012455 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gnfvv"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.013322 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qzm5m"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.013741 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.013807 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014838 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-config\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014869 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-client-ca\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014894 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014914 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt27\" (UniqueName: \"kubernetes.io/projected/9897feaf-0f0f-44a2-bb22-8863579d6359-kube-api-access-hjt27\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014935 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-audit\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014952 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-etcd-serving-ca\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014973 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.014994 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/953dbec8-f4fc-411b-8a6f-191a52d20523-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015022 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: 
\"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015040 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-policies\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015057 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-etcd-client\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015084 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20559448-c2fa-4138-ba2f-f9907e6ef183-auth-proxy-config\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015103 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015121 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015148 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-service-ca-bundle\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015167 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbk7q\" (UniqueName: \"kubernetes.io/projected/4d0749b3-9886-4360-8a7e-c80fa8921a50-kube-api-access-wbk7q\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015192 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p475g\" (UniqueName: \"kubernetes.io/projected/20559448-c2fa-4138-ba2f-f9907e6ef183-kube-api-access-p475g\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015210 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxktw\" (UniqueName: \"kubernetes.io/projected/1233075e-7e1b-48ab-bdec-50d771eec172-kube-api-access-rxktw\") pod 
\"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015230 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015248 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015265 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2hc\" (UniqueName: \"kubernetes.io/projected/953dbec8-f4fc-411b-8a6f-191a52d20523-kube-api-access-4j2hc\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015281 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20559448-c2fa-4138-ba2f-f9907e6ef183-config\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015306 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpz4\" (UniqueName: \"kubernetes.io/projected/783f9d2c-ef3f-4915-819f-16ad8ddf943a-kube-api-access-tcpz4\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015419 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953dbec8-f4fc-411b-8a6f-191a52d20523-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015445 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015461 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-serving-cert\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015481 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015503 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015523 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015539 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/20559448-c2fa-4138-ba2f-f9907e6ef183-machine-approver-tls\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015558 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-dir\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc 
kubenswrapper[5005]: I0225 11:19:51.015575 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015633 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/783f9d2c-ef3f-4915-819f-16ad8ddf943a-serving-cert\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015704 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015762 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnxh\" (UniqueName: \"kubernetes.io/projected/af95ef77-54c8-4b77-9a76-fcac6a29c993-kube-api-access-qvnxh\") pod \"cluster-samples-operator-665b6dd947-tzkzd\" (UID: \"af95ef77-54c8-4b77-9a76-fcac6a29c993\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015786 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p8wxr\" (UniqueName: \"kubernetes.io/projected/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-kube-api-access-p8wxr\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015817 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1233075e-7e1b-48ab-bdec-50d771eec172-audit-dir\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015834 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e972077c-5857-4a15-bd23-21b21fbad7b1-serving-cert\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015852 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015872 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/af95ef77-54c8-4b77-9a76-fcac6a29c993-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tzkzd\" (UID: \"af95ef77-54c8-4b77-9a76-fcac6a29c993\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015888 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-client-ca\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015905 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-encryption-config\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015930 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-config\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015956 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d0749b3-9886-4360-8a7e-c80fa8921a50-serving-cert\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015976 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.015995 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-config\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016011 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-image-import-ca\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016028 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-images\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016046 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-config\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 
crc kubenswrapper[5005]: I0225 11:19:51.016064 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq7qg\" (UniqueName: \"kubernetes.io/projected/e972077c-5857-4a15-bd23-21b21fbad7b1-kube-api-access-jq7qg\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016080 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-config\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016095 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1233075e-7e1b-48ab-bdec-50d771eec172-node-pullsecrets\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016512 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016765 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016867 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.016982 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 25 
11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.017069 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.017152 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.017193 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.017649 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.017823 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.019007 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.019185 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.019452 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.019535 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.020034 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.022359 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.022688 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.023126 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.023409 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.023918 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.024018 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.024193 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.025873 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.025938 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030327 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k"] Feb 25 11:19:51 crc kubenswrapper[5005]: 
I0225 11:19:51.030414 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030552 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030569 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030705 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030745 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030705 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030830 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030850 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.030956 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.031092 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.032102 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.032128 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.033049 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.033520 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mn7l8"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.034382 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.036729 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.039244 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.040839 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.042616 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.043236 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.047956 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.049765 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.050887 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.050915 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.054105 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.055389 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.056209 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.057207 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.057650 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.058848 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.058992 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.059333 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.059883 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.060449 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.061044 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.061887 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.062120 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.063267 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjfgv"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.063286 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.064750 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.064948 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.065896 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.065998 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.066730 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4m859"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.067518 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x6pzz"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.067809 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.068155 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.068408 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.069726 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hzbjf"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.070253 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.072220 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.072254 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vmm2d"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.072555 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.073016 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.075149 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.077685 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.078970 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.079511 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-277gg"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.081845 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rx5lf"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.082789 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-65dkm"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.084803 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gjfbr"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.086562 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xnfp6"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.087900 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf"] Feb 25 11:19:51 crc 
kubenswrapper[5005]: I0225 11:19:51.089501 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bpj6f"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.090156 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.090916 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.093118 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rsd6r"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.094205 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-75zdc"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.096798 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qcfmz"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.097463 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.098748 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.099756 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.101388 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qzm5m"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.103021 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.104226 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.105601 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.106635 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.107757 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4m859"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.109307 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.110292 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.113613 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bpj6f"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.116675 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118304 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p475g\" (UniqueName: \"kubernetes.io/projected/20559448-c2fa-4138-ba2f-f9907e6ef183-kube-api-access-p475g\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118491 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbk7q\" (UniqueName: \"kubernetes.io/projected/4d0749b3-9886-4360-8a7e-c80fa8921a50-kube-api-access-wbk7q\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118586 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118678 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-trusted-ca-bundle\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118759 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-client\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118843 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.118926 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2hc\" (UniqueName: \"kubernetes.io/projected/953dbec8-f4fc-411b-8a6f-191a52d20523-kube-api-access-4j2hc\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119003 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxktw\" (UniqueName: \"kubernetes.io/projected/1233075e-7e1b-48ab-bdec-50d771eec172-kube-api-access-rxktw\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119089 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119166 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20559448-c2fa-4138-ba2f-f9907e6ef183-config\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119245 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpz4\" (UniqueName: \"kubernetes.io/projected/783f9d2c-ef3f-4915-819f-16ad8ddf943a-kube-api-access-tcpz4\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119324 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953dbec8-f4fc-411b-8a6f-191a52d20523-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119453 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-service-ca\") pod \"console-f9d7485db-277gg\" (UID: 
\"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119545 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-service-ca\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119633 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119713 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-serving-cert\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119788 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119874 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.119960 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120042 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-oauth-config\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120116 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-metrics-certs\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120191 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-dir\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120273 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/20559448-c2fa-4138-ba2f-f9907e6ef183-machine-approver-tls\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120354 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120457 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faec049e-e45a-4ab3-8761-f4bd01acc732-service-ca-bundle\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120532 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-serving-cert\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120605 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-ca\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120687 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120763 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnxh\" (UniqueName: \"kubernetes.io/projected/af95ef77-54c8-4b77-9a76-fcac6a29c993-kube-api-access-qvnxh\") pod \"cluster-samples-operator-665b6dd947-tzkzd\" (UID: \"af95ef77-54c8-4b77-9a76-fcac6a29c993\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120840 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/783f9d2c-ef3f-4915-819f-16ad8ddf943a-serving-cert\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120921 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8wxr\" (UniqueName: \"kubernetes.io/projected/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-kube-api-access-p8wxr\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.120998 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-stats-auth\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.121096 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1233075e-7e1b-48ab-bdec-50d771eec172-audit-dir\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.121200 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-dir\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.121424 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1233075e-7e1b-48ab-bdec-50d771eec172-audit-dir\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.122249 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.122455 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123303 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123411 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-config\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123510 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123591 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e972077c-5857-4a15-bd23-21b21fbad7b1-serving-cert\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" 
Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123653 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/af95ef77-54c8-4b77-9a76-fcac6a29c993-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tzkzd\" (UID: \"af95ef77-54c8-4b77-9a76-fcac6a29c993\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123693 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-client-ca\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123735 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-encryption-config\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123782 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-config\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123836 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8rv\" (UniqueName: \"kubernetes.io/projected/cf5c0827-c687-4ab2-a02f-7b74d00a57db-kube-api-access-4s8rv\") pod 
\"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123943 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.123996 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-config\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.124032 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d0749b3-9886-4360-8a7e-c80fa8921a50-serving-cert\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.124390 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.125061 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953dbec8-f4fc-411b-8a6f-191a52d20523-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.125768 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20559448-c2fa-4138-ba2f-f9907e6ef183-config\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.125905 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-serving-cert\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.126039 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/20559448-c2fa-4138-ba2f-f9907e6ef183-machine-approver-tls\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.126342 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xxm8g"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.127086 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.127654 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-config\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.128035 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.128768 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-client-ca\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.128867 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-config\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.128982 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-image-import-ca\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 
11:19:51.129092 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-images\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.129359 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/783f9d2c-ef3f-4915-819f-16ad8ddf943a-serving-cert\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.129723 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.129858 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq7qg\" (UniqueName: \"kubernetes.io/projected/e972077c-5857-4a15-bd23-21b21fbad7b1-kube-api-access-jq7qg\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.130188 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-config\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.130290 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-config\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.130302 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-image-import-ca\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.130999 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-images\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.131328 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-config\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.131679 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1233075e-7e1b-48ab-bdec-50d771eec172-node-pullsecrets\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.131772 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-default-certificate\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.131846 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cttl\" (UniqueName: \"kubernetes.io/projected/faec049e-e45a-4ab3-8761-f4bd01acc732-kube-api-access-2cttl\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.131895 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/af95ef77-54c8-4b77-9a76-fcac6a29c993-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tzkzd\" (UID: \"af95ef77-54c8-4b77-9a76-fcac6a29c993\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.131913 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1233075e-7e1b-48ab-bdec-50d771eec172-node-pullsecrets\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.132023 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e972077c-5857-4a15-bd23-21b21fbad7b1-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.132014 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-client-ca\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.132119 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-serving-cert\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.132303 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnzv\" (UniqueName: \"kubernetes.io/projected/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-kube-api-access-fhnzv\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.133482 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-encryption-config\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.133793 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-client-ca\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.133811 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.134807 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-config\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.137891 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.138038 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.139011 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.139775 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.142766 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.142957 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjt27\" (UniqueName: \"kubernetes.io/projected/9897feaf-0f0f-44a2-bb22-8863579d6359-kube-api-access-hjt27\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.143153 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.143515 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d0749b3-9886-4360-8a7e-c80fa8921a50-serving-cert\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: 
\"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.143721 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.143834 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.143703 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-config\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144040 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-audit\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144489 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-etcd-serving-ca\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 
11:19:51.144570 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144639 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-audit\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144681 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144806 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/953dbec8-f4fc-411b-8a6f-191a52d20523-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144893 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144935 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-config\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.144950 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.145164 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-policies\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.145216 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-etcd-client\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.145293 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1233075e-7e1b-48ab-bdec-50d771eec172-etcd-serving-ca\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.145864 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-policies\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.145894 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146009 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20559448-c2fa-4138-ba2f-f9907e6ef183-auth-proxy-config\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146055 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146308 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-config\") pod \"console-f9d7485db-277gg\" (UID: 
\"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146422 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146610 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-service-ca-bundle\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146680 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-oauth-serving-cert\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.146748 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20559448-c2fa-4138-ba2f-f9907e6ef183-auth-proxy-config\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.147626 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4d0749b3-9886-4360-8a7e-c80fa8921a50-service-ca-bundle\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.147737 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.149175 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.150109 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/953dbec8-f4fc-411b-8a6f-191a52d20523-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.150509 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.151195 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1233075e-7e1b-48ab-bdec-50d771eec172-etcd-client\") pod 
\"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.151512 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.152169 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.152450 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.153524 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.154522 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2bj7t"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.155547 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.155774 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.156607 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x6pzz"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.156746 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.157677 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.158682 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-trf82"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.160003 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjfgv"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.160136 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.160820 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hzbjf"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.161812 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-trf82"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.162671 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6q7wr"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.163987 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6q7wr"] Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.164123 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.176773 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.197676 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.219868 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.238112 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.247317 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.247647 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-config\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.247840 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-oauth-serving-cert\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248044 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-client\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248245 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-trusted-ca-bundle\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248438 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248585 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-service-ca\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248694 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-service-ca\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248816 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-oauth-config\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.248927 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-metrics-certs\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249024 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-oauth-serving-cert\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249110 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-ca\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249224 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faec049e-e45a-4ab3-8761-f4bd01acc732-service-ca-bundle\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249346 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-serving-cert\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249549 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-stats-auth\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249674 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249844 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-config\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249943 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s8rv\" (UniqueName: \"kubernetes.io/projected/cf5c0827-c687-4ab2-a02f-7b74d00a57db-kube-api-access-4s8rv\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.250104 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cttl\" (UniqueName: \"kubernetes.io/projected/faec049e-e45a-4ab3-8761-f4bd01acc732-kube-api-access-2cttl\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.250233 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-default-certificate\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.250335 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnzv\" (UniqueName: \"kubernetes.io/projected/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-kube-api-access-fhnzv\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.250475 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-serving-cert\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.250487 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faec049e-e45a-4ab3-8761-f4bd01acc732-service-ca-bundle\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.250008 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-service-ca\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249118 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-config\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.249982 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-trusted-ca-bundle\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.251191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-service-ca\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.252003 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-ca\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.252139 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-config\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.252285 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-etcd-client\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.252496 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-oauth-config\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.254355 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-stats-auth\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.255218 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-serving-cert\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.256202 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-metrics-certs\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.256671 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.258488 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/faec049e-e45a-4ab3-8761-f4bd01acc732-default-certificate\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " 
pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.258827 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.262122 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-serving-cert\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.262792 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: \"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.317812 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.337496 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.356976 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc 
kubenswrapper[5005]: I0225 11:19:51.378110 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.397427 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.419486 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.438005 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.458247 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.476981 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.503275 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.517279 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.536836 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.558065 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.578461 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.597327 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.617847 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.636918 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.657150 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.677404 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.685149 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.685261 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.685315 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.685508 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.697300 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.716725 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.737705 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.757211 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.777311 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.797144 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.818141 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.837293 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.857238 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.877123 5005 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.897005 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.916755 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.936827 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.957185 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 25 11:19:51 crc kubenswrapper[5005]: I0225 11:19:51.976809 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.006044 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.016966 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.038200 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.057646 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.075226 5005 
request.go:700] Waited for 1.014343808s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.077631 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.097871 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.118347 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.137880 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.166978 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.177694 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.198303 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.217530 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.237674 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 25 11:19:52 crc 
kubenswrapper[5005]: I0225 11:19:52.256750 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.278090 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.296349 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.318522 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.337356 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.356681 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.378563 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.397119 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.417315 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.437507 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.458021 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.477682 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.497477 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.517768 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.537801 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.568771 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.583392 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.597634 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.617286 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.636978 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.657799 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 
11:19:52.677745 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.697481 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.717055 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.736772 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.757573 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.812710 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxktw\" (UniqueName: \"kubernetes.io/projected/1233075e-7e1b-48ab-bdec-50d771eec172-kube-api-access-rxktw\") pod \"apiserver-76f77b778f-xnfp6\" (UID: \"1233075e-7e1b-48ab-bdec-50d771eec172\") " pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.826273 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnxh\" (UniqueName: \"kubernetes.io/projected/af95ef77-54c8-4b77-9a76-fcac6a29c993-kube-api-access-qvnxh\") pod \"cluster-samples-operator-665b6dd947-tzkzd\" (UID: \"af95ef77-54c8-4b77-9a76-fcac6a29c993\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.844638 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.846817 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8wxr\" (UniqueName: \"kubernetes.io/projected/771a10ce-b3f7-4d81-9963-51d7a38c2cdf-kube-api-access-p8wxr\") pod \"machine-api-operator-5694c8668f-75zdc\" (UID: \"771a10ce-b3f7-4d81-9963-51d7a38c2cdf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.871571 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2hc\" (UniqueName: \"kubernetes.io/projected/953dbec8-f4fc-411b-8a6f-191a52d20523-kube-api-access-4j2hc\") pod \"openshift-apiserver-operator-796bbdcf4f-2qtlb\" (UID: \"953dbec8-f4fc-411b-8a6f-191a52d20523\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.893227 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p475g\" (UniqueName: \"kubernetes.io/projected/20559448-c2fa-4138-ba2f-f9907e6ef183-kube-api-access-p475g\") pod \"machine-approver-56656f9798-lh4hs\" (UID: \"20559448-c2fa-4138-ba2f-f9907e6ef183\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.925552 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbk7q\" (UniqueName: \"kubernetes.io/projected/4d0749b3-9886-4360-8a7e-c80fa8921a50-kube-api-access-wbk7q\") pod \"authentication-operator-69f744f599-gjfbr\" (UID: \"4d0749b3-9886-4360-8a7e-c80fa8921a50\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.930746 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tcpz4\" (UniqueName: \"kubernetes.io/projected/783f9d2c-ef3f-4915-819f-16ad8ddf943a-kube-api-access-tcpz4\") pod \"controller-manager-879f6c89f-mn7l8\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.953560 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq7qg\" (UniqueName: \"kubernetes.io/projected/e972077c-5857-4a15-bd23-21b21fbad7b1-kube-api-access-jq7qg\") pod \"route-controller-manager-6576b87f9c-mqmwp\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.957543 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.963037 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjt27\" (UniqueName: \"kubernetes.io/projected/9897feaf-0f0f-44a2-bb22-8863579d6359-kube-api-access-hjt27\") pod \"oauth-openshift-558db77b4-rx5lf\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.979713 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.997432 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 25 11:19:52 crc kubenswrapper[5005]: I0225 11:19:52.998561 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.029286 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.029557 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.037754 5005 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.057626 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.066089 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.076893 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.095078 5005 request.go:700] Waited for 1.930682603s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.096944 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.112959 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.118101 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.122714 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.134782 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.139228 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.141842 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd"] Feb 25 11:19:53 crc kubenswrapper[5005]: W0225 11:19:53.150576 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20559448_c2fa_4138_ba2f_f9907e6ef183.slice/crio-f21873f403e4a035ff49eab20d57a3e2d178c77899dae544413731f69e1daacd WatchSource:0}: Error finding container f21873f403e4a035ff49eab20d57a3e2d178c77899dae544413731f69e1daacd: Status 404 returned error can't find the container with id f21873f403e4a035ff49eab20d57a3e2d178c77899dae544413731f69e1daacd Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.154115 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xlq97\" (UID: 
\"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.156888 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.175625 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cttl\" (UniqueName: \"kubernetes.io/projected/faec049e-e45a-4ab3-8761-f4bd01acc732-kube-api-access-2cttl\") pod \"router-default-5444994796-gnfvv\" (UID: \"faec049e-e45a-4ab3-8761-f4bd01acc732\") " pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.195063 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s8rv\" (UniqueName: \"kubernetes.io/projected/cf5c0827-c687-4ab2-a02f-7b74d00a57db-kube-api-access-4s8rv\") pod \"console-f9d7485db-277gg\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.214471 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnzv\" (UniqueName: \"kubernetes.io/projected/edfd83bd-08e3-4a78-9b03-1fa3100e5ebf-kube-api-access-fhnzv\") pod \"etcd-operator-b45778765-qzm5m\" (UID: \"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.235954 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xnfp6"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.247689 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" 
event={"ID":"20559448-c2fa-4138-ba2f-f9907e6ef183","Type":"ContainerStarted","Data":"f21873f403e4a035ff49eab20d57a3e2d178c77899dae544413731f69e1daacd"} Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.257621 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.274921 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.277407 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.282316 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.284297 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4031b2bd-16b2-49b4-a187-5eb591356aff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.284507 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cee21152-b956-4d24-a5f8-9f62bad2cc3e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.286830 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fd30ae-0263-4080-9eca-a61666c2937d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.287023 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgs7p\" (UniqueName: \"kubernetes.io/projected/1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c-kube-api-access-kgs7p\") pod \"downloads-7954f5f757-65dkm\" (UID: \"1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c\") " pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.287136 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpqxn\" (UniqueName: \"kubernetes.io/projected/f7118aeb-770e-44b1-87de-f0a633360ff6-kube-api-access-hpqxn\") pod \"dns-operator-744455d44c-vmm2d\" (UID: \"f7118aeb-770e-44b1-87de-f0a633360ff6\") " pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290515 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ffl6\" (UniqueName: \"kubernetes.io/projected/9254e22f-7734-4750-b29a-af5f2872eeef-kube-api-access-8ffl6\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290596 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-trusted-ca\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290640 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-certificates\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290724 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4031b2bd-16b2-49b4-a187-5eb591356aff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290780 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9254e22f-7734-4750-b29a-af5f2872eeef-trusted-ca\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290799 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsbt\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-kube-api-access-dmsbt\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290822 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-tls\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290840 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cee21152-b956-4d24-a5f8-9f62bad2cc3e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290881 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtlq\" (UniqueName: \"kubernetes.io/projected/cee21152-b956-4d24-a5f8-9f62bad2cc3e-kube-api-access-vwtlq\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290898 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9254e22f-7734-4750-b29a-af5f2872eeef-config\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.290996 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7118aeb-770e-44b1-87de-f0a633360ff6-metrics-tls\") pod \"dns-operator-744455d44c-vmm2d\" (UID: \"f7118aeb-770e-44b1-87de-f0a633360ff6\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.291018 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9254e22f-7734-4750-b29a-af5f2872eeef-serving-cert\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.291034 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fd30ae-0263-4080-9eca-a61666c2937d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.291062 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.291129 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-bound-sa-token\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.291346 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6hjr7\" (UniqueName: \"kubernetes.io/projected/07fd30ae-0263-4080-9eca-a61666c2937d-kube-api-access-6hjr7\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.291362 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cee21152-b956-4d24-a5f8-9f62bad2cc3e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.293181 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:53.793168708 +0000 UTC m=+107.833901035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.293618 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.303681 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.318560 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mn7l8"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.320895 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.338361 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.356836 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-75zdc"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.358761 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.382031 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.395836 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396488 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gzvt8\" (UniqueName: \"kubernetes.io/projected/75d48e65-ec5a-4706-bf55-2b97ee71af11-kube-api-access-gzvt8\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl9wb\" (UID: \"75d48e65-ec5a-4706-bf55-2b97ee71af11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396581 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ffl6\" (UniqueName: \"kubernetes.io/projected/9254e22f-7734-4750-b29a-af5f2872eeef-kube-api-access-8ffl6\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396628 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/45ba893b-be5a-43dc-a979-b528286386cd-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396653 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396676 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7l5\" (UniqueName: \"kubernetes.io/projected/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-kube-api-access-7z7l5\") pod 
\"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396697 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbfbbf24-e0c7-4342-9200-d507e157a3c3-srv-cert\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396748 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-certificates\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396770 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396793 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f57976f3-2e78-4677-9491-d86626551cb8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 
11:19:53.396814 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-config-volume\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396837 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f32def92-50da-4035-a457-5d66df363c5c-webhook-cert\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396871 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ba893b-be5a-43dc-a979-b528286386cd-serving-cert\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396894 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4031b2bd-16b2-49b4-a187-5eb591356aff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396945 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdd8\" (UniqueName: \"kubernetes.io/projected/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-kube-api-access-wxdd8\") pod 
\"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.396969 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cefc6d-9165-410b-9af2-0cdb4d56d85c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397003 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910fa1ee-59a6-4ca3-b937-ab44679c93d9-metrics-tls\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397026 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-encryption-config\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397048 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsbt\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-kube-api-access-dmsbt\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397067 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/910fa1ee-59a6-4ca3-b937-ab44679c93d9-config-volume\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397086 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-metrics-tls\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397107 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-tls\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397128 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cee21152-b956-4d24-a5f8-9f62bad2cc3e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397151 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bw5d\" (UniqueName: \"kubernetes.io/projected/15293032-ea03-4071-99a7-126b0348f0c0-kube-api-access-8bw5d\") pod \"ingress-canary-bpj6f\" (UID: \"15293032-ea03-4071-99a7-126b0348f0c0\") " 
pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397340 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtqx\" (UniqueName: \"kubernetes.io/projected/3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e-kube-api-access-fhtqx\") pod \"migrator-59844c95c7-bfzqz\" (UID: \"3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397361 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtlq\" (UniqueName: \"kubernetes.io/projected/cee21152-b956-4d24-a5f8-9f62bad2cc3e-kube-api-access-vwtlq\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397428 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-csi-data-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397464 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-socket-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397484 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/65cefc6d-9165-410b-9af2-0cdb4d56d85c-config\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397501 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-audit-policies\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397550 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fd30ae-0263-4080-9eca-a61666c2937d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397572 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/de78fa9e-1a35-4b2e-b5c4-e5de42bf7396-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjfgv\" (UID: \"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397601 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hjr7\" (UniqueName: \"kubernetes.io/projected/07fd30ae-0263-4080-9eca-a61666c2937d-kube-api-access-6hjr7\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397624 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cee21152-b956-4d24-a5f8-9f62bad2cc3e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397645 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-srv-cert\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397667 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlskk\" (UniqueName: \"kubernetes.io/projected/4378ed4f-6f5c-418c-9027-b307ffde4aab-kube-api-access-jlskk\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397683 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pppc8\" (UniqueName: \"kubernetes.io/projected/de78fa9e-1a35-4b2e-b5c4-e5de42bf7396-kube-api-access-pppc8\") pod \"multus-admission-controller-857f4d67dd-hjfgv\" (UID: \"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397700 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-serving-cert\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397721 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-etcd-client\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397754 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w56xk\" (UniqueName: \"kubernetes.io/projected/81150bf8-7367-43d6-bd5c-205a6a07ac7c-kube-api-access-w56xk\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397776 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cng\" (UniqueName: \"kubernetes.io/projected/24d89ac7-e3b2-48ff-a21b-2de526920192-kube-api-access-55cng\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397799 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cee21152-b956-4d24-a5f8-9f62bad2cc3e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397818 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f57976f3-2e78-4677-9491-d86626551cb8-images\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397839 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbpw\" (UniqueName: \"kubernetes.io/projected/9a85e597-ce82-4b41-b00e-e142bbd38849-kube-api-access-skbpw\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397872 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397895 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7eaa1833-a4ad-422c-93db-6c442609d050-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397926 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-btl7p\" (UniqueName: \"kubernetes.io/projected/10b30994-5301-45ae-bf7b-f6c3c3f600b3-kube-api-access-btl7p\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397947 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81150bf8-7367-43d6-bd5c-205a6a07ac7c-serving-cert\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397968 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-proxy-tls\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.397988 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8fw\" (UniqueName: \"kubernetes.io/projected/910fa1ee-59a6-4ca3-b937-ab44679c93d9-kube-api-access-tb8fw\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398003 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f32def92-50da-4035-a457-5d66df363c5c-tmpfs\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398073 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgs7p\" (UniqueName: \"kubernetes.io/projected/1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c-kube-api-access-kgs7p\") pod \"downloads-7954f5f757-65dkm\" (UID: \"1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c\") " pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398098 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5qr\" (UniqueName: \"kubernetes.io/projected/4a2e8003-764c-4068-ada2-1555914b1dca-kube-api-access-gq5qr\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398140 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpqxn\" (UniqueName: \"kubernetes.io/projected/f7118aeb-770e-44b1-87de-f0a633360ff6-kube-api-access-hpqxn\") pod \"dns-operator-744455d44c-vmm2d\" (UID: \"f7118aeb-770e-44b1-87de-f0a633360ff6\") " pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398166 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a2e8003-764c-4068-ada2-1555914b1dca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398189 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl56b\" (UniqueName: \"kubernetes.io/projected/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-kube-api-access-sl56b\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398214 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398251 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15293032-ea03-4071-99a7-126b0348f0c0-cert\") pod \"ingress-canary-bpj6f\" (UID: \"15293032-ea03-4071-99a7-126b0348f0c0\") " pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.398273 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-plugins-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.398945 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:53.898898791 +0000 UTC m=+107.939631118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.399038 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-trusted-ca\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.399075 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cefc6d-9165-410b-9af2-0cdb4d56d85c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.399813 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fd30ae-0263-4080-9eca-a61666c2937d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.400294 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-trusted-ca\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.400326 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.400355 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/08130b12-61f0-411b-98b6-52f25bf4c639-signing-cabundle\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.404797 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-trusted-ca\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.406194 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-certificates\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.406537 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4031b2bd-16b2-49b4-a187-5eb591356aff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.406737 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-secret-volume\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.407447 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f57976f3-2e78-4677-9491-d86626551cb8-proxy-tls\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.407671 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9254e22f-7734-4750-b29a-af5f2872eeef-trusted-ca\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.408124 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-registration-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " 
pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.408204 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.408243 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsq7\" (UniqueName: \"kubernetes.io/projected/f32def92-50da-4035-a457-5d66df363c5c-kube-api-access-blsq7\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.408946 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7eaa1833-a4ad-422c-93db-6c442609d050-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.408973 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7eaa1833-a4ad-422c-93db-6c442609d050-ready\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.409255 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b924dacd-6667-491c-8464-b849b6ae7624-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.409322 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsj5l\" (UniqueName: \"kubernetes.io/projected/e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80-kube-api-access-lsj5l\") pod \"package-server-manager-789f6589d5-stkgx\" (UID: \"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.409351 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9254e22f-7734-4750-b29a-af5f2872eeef-trusted-ca\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.410044 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9254e22f-7734-4750-b29a-af5f2872eeef-config\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.410679 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849nf\" (UniqueName: \"kubernetes.io/projected/08130b12-61f0-411b-98b6-52f25bf4c639-kube-api-access-849nf\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 
crc kubenswrapper[5005]: I0225 11:19:53.410996 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7118aeb-770e-44b1-87de-f0a633360ff6-metrics-tls\") pod \"dns-operator-744455d44c-vmm2d\" (UID: \"f7118aeb-770e-44b1-87de-f0a633360ff6\") " pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.411169 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9254e22f-7734-4750-b29a-af5f2872eeef-serving-cert\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.412220 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-tls\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.412272 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/75d48e65-ec5a-4706-bf55-2b97ee71af11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl9wb\" (UID: \"75d48e65-ec5a-4706-bf55-2b97ee71af11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.412721 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cee21152-b956-4d24-a5f8-9f62bad2cc3e-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.412780 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-bound-sa-token\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.412820 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f32def92-50da-4035-a457-5d66df363c5c-apiservice-cert\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.412949 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4031b2bd-16b2-49b4-a187-5eb591356aff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.413019 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10b30994-5301-45ae-bf7b-f6c3c3f600b3-node-bootstrap-token\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.415317 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4031b2bd-16b2-49b4-a187-5eb591356aff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.416173 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9254e22f-7734-4750-b29a-af5f2872eeef-serving-cert\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417435 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7118aeb-770e-44b1-87de-f0a633360ff6-metrics-tls\") pod \"dns-operator-744455d44c-vmm2d\" (UID: \"f7118aeb-770e-44b1-87de-f0a633360ff6\") " pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417493 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-stkgx\" (UID: \"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417550 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-mountpoint-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " 
pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417571 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81150bf8-7367-43d6-bd5c-205a6a07ac7c-config\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417630 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvwj\" (UniqueName: \"kubernetes.io/projected/7eaa1833-a4ad-422c-93db-6c442609d050-kube-api-access-npvwj\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417653 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10b30994-5301-45ae-bf7b-f6c3c3f600b3-certs\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417674 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbfbbf24-e0c7-4342-9200-d507e157a3c3-profile-collector-cert\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417694 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/08130b12-61f0-411b-98b6-52f25bf4c639-signing-key\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417716 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d89ac7-e3b2-48ff-a21b-2de526920192-audit-dir\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417742 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxs56\" (UniqueName: \"kubernetes.io/projected/fbfbbf24-e0c7-4342-9200-d507e157a3c3-kube-api-access-wxs56\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417763 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2e8003-764c-4068-ada2-1555914b1dca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417787 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 
11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417843 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4766\" (UniqueName: \"kubernetes.io/projected/f57976f3-2e78-4677-9491-d86626551cb8-kube-api-access-v4766\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417868 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b924dacd-6667-491c-8464-b849b6ae7624-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.417957 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fd30ae-0263-4080-9eca-a61666c2937d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.418048 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmfcn\" (UniqueName: \"kubernetes.io/projected/45ba893b-be5a-43dc-a979-b528286386cd-kube-api-access-gmfcn\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.418070 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b924dacd-6667-491c-8464-b849b6ae7624-config\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.418093 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7r5\" (UniqueName: \"kubernetes.io/projected/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-kube-api-access-7j7r5\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.420914 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cee21152-b956-4d24-a5f8-9f62bad2cc3e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.422762 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fd30ae-0263-4080-9eca-a61666c2937d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.423492 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9254e22f-7734-4750-b29a-af5f2872eeef-config\") pod \"console-operator-58897d9998-rsd6r\" (UID: 
\"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.433465 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.438399 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hjr7\" (UniqueName: \"kubernetes.io/projected/07fd30ae-0263-4080-9eca-a61666c2937d-kube-api-access-6hjr7\") pod \"openshift-controller-manager-operator-756b6f6bc6-v692m\" (UID: \"07fd30ae-0263-4080-9eca-a61666c2937d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.462557 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ffl6\" (UniqueName: \"kubernetes.io/projected/9254e22f-7734-4750-b29a-af5f2872eeef-kube-api-access-8ffl6\") pod \"console-operator-58897d9998-rsd6r\" (UID: \"9254e22f-7734-4750-b29a-af5f2872eeef\") " pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.467304 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gjfbr"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.472144 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cee21152-b956-4d24-a5f8-9f62bad2cc3e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.485519 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.491268 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.510481 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rx5lf"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.511851 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpqxn\" (UniqueName: \"kubernetes.io/projected/f7118aeb-770e-44b1-87de-f0a633360ff6-kube-api-access-hpqxn\") pod \"dns-operator-744455d44c-vmm2d\" (UID: \"f7118aeb-770e-44b1-87de-f0a633360ff6\") " pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.518909 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/75d48e65-ec5a-4706-bf55-2b97ee71af11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl9wb\" (UID: \"75d48e65-ec5a-4706-bf55-2b97ee71af11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.518944 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f32def92-50da-4035-a457-5d66df363c5c-apiservice-cert\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.518967 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/10b30994-5301-45ae-bf7b-f6c3c3f600b3-node-bootstrap-token\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.518984 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-stkgx\" (UID: \"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519004 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-mountpoint-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519021 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81150bf8-7367-43d6-bd5c-205a6a07ac7c-config\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519038 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvwj\" (UniqueName: \"kubernetes.io/projected/7eaa1833-a4ad-422c-93db-6c442609d050-kube-api-access-npvwj\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519052 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10b30994-5301-45ae-bf7b-f6c3c3f600b3-certs\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519067 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbfbbf24-e0c7-4342-9200-d507e157a3c3-profile-collector-cert\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519084 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/08130b12-61f0-411b-98b6-52f25bf4c639-signing-key\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519100 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d89ac7-e3b2-48ff-a21b-2de526920192-audit-dir\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519116 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxs56\" (UniqueName: \"kubernetes.io/projected/fbfbbf24-e0c7-4342-9200-d507e157a3c3-kube-api-access-wxs56\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc 
kubenswrapper[5005]: I0225 11:19:53.519131 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2e8003-764c-4068-ada2-1555914b1dca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519148 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519170 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4766\" (UniqueName: \"kubernetes.io/projected/f57976f3-2e78-4677-9491-d86626551cb8-kube-api-access-v4766\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519186 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b924dacd-6667-491c-8464-b849b6ae7624-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519204 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmfcn\" (UniqueName: 
\"kubernetes.io/projected/45ba893b-be5a-43dc-a979-b528286386cd-kube-api-access-gmfcn\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519220 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b924dacd-6667-491c-8464-b849b6ae7624-config\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519237 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7r5\" (UniqueName: \"kubernetes.io/projected/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-kube-api-access-7j7r5\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519257 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzvt8\" (UniqueName: \"kubernetes.io/projected/75d48e65-ec5a-4706-bf55-2b97ee71af11-kube-api-access-gzvt8\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl9wb\" (UID: \"75d48e65-ec5a-4706-bf55-2b97ee71af11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519274 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/45ba893b-be5a-43dc-a979-b528286386cd-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519293 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519306 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbfbbf24-e0c7-4342-9200-d507e157a3c3-srv-cert\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519322 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7l5\" (UniqueName: \"kubernetes.io/projected/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-kube-api-access-7z7l5\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519338 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.519353 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f57976f3-2e78-4677-9491-d86626551cb8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521543 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-config-volume\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521576 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f32def92-50da-4035-a457-5d66df363c5c-webhook-cert\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521596 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ba893b-be5a-43dc-a979-b528286386cd-serving-cert\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521622 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdd8\" (UniqueName: \"kubernetes.io/projected/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-kube-api-access-wxdd8\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc 
kubenswrapper[5005]: I0225 11:19:53.521648 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cefc6d-9165-410b-9af2-0cdb4d56d85c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521665 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910fa1ee-59a6-4ca3-b937-ab44679c93d9-metrics-tls\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521681 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-encryption-config\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521702 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/910fa1ee-59a6-4ca3-b937-ab44679c93d9-config-volume\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521719 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-metrics-tls\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 
11:19:53.521736 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bw5d\" (UniqueName: \"kubernetes.io/projected/15293032-ea03-4071-99a7-126b0348f0c0-kube-api-access-8bw5d\") pod \"ingress-canary-bpj6f\" (UID: \"15293032-ea03-4071-99a7-126b0348f0c0\") " pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521754 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtqx\" (UniqueName: \"kubernetes.io/projected/3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e-kube-api-access-fhtqx\") pod \"migrator-59844c95c7-bfzqz\" (UID: \"3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521780 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-csi-data-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521805 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-socket-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521822 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cefc6d-9165-410b-9af2-0cdb4d56d85c-config\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc 
kubenswrapper[5005]: I0225 11:19:53.521838 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-audit-policies\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521854 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/de78fa9e-1a35-4b2e-b5c4-e5de42bf7396-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjfgv\" (UID: \"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521871 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-srv-cert\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521889 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521906 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlskk\" (UniqueName: \"kubernetes.io/projected/4378ed4f-6f5c-418c-9027-b307ffde4aab-kube-api-access-jlskk\") pod \"csi-hostpathplugin-trf82\" (UID: 
\"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521923 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pppc8\" (UniqueName: \"kubernetes.io/projected/de78fa9e-1a35-4b2e-b5c4-e5de42bf7396-kube-api-access-pppc8\") pod \"multus-admission-controller-857f4d67dd-hjfgv\" (UID: \"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521937 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-serving-cert\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521954 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-etcd-client\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521971 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w56xk\" (UniqueName: \"kubernetes.io/projected/81150bf8-7367-43d6-bd5c-205a6a07ac7c-kube-api-access-w56xk\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.521986 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55cng\" (UniqueName: 
\"kubernetes.io/projected/24d89ac7-e3b2-48ff-a21b-2de526920192-kube-api-access-55cng\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522005 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f57976f3-2e78-4677-9491-d86626551cb8-images\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522020 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbpw\" (UniqueName: \"kubernetes.io/projected/9a85e597-ce82-4b41-b00e-e142bbd38849-kube-api-access-skbpw\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522039 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522057 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7eaa1833-a4ad-422c-93db-6c442609d050-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522074 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btl7p\" (UniqueName: \"kubernetes.io/projected/10b30994-5301-45ae-bf7b-f6c3c3f600b3-kube-api-access-btl7p\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522088 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81150bf8-7367-43d6-bd5c-205a6a07ac7c-serving-cert\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522112 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-proxy-tls\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522129 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8fw\" (UniqueName: \"kubernetes.io/projected/910fa1ee-59a6-4ca3-b937-ab44679c93d9-kube-api-access-tb8fw\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522143 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f32def92-50da-4035-a457-5d66df363c5c-tmpfs\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 
11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522170 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5qr\" (UniqueName: \"kubernetes.io/projected/4a2e8003-764c-4068-ada2-1555914b1dca-kube-api-access-gq5qr\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522190 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a2e8003-764c-4068-ada2-1555914b1dca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522207 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl56b\" (UniqueName: \"kubernetes.io/projected/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-kube-api-access-sl56b\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522223 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522238 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/15293032-ea03-4071-99a7-126b0348f0c0-cert\") pod \"ingress-canary-bpj6f\" (UID: \"15293032-ea03-4071-99a7-126b0348f0c0\") " pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522253 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-plugins-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522269 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-trusted-ca\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522282 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cefc6d-9165-410b-9af2-0cdb4d56d85c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522300 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522315 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/08130b12-61f0-411b-98b6-52f25bf4c639-signing-cabundle\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522334 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-secret-volume\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522352 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f57976f3-2e78-4677-9491-d86626551cb8-proxy-tls\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522386 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-registration-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522401 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blsq7\" (UniqueName: \"kubernetes.io/projected/f32def92-50da-4035-a457-5d66df363c5c-kube-api-access-blsq7\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522417 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522435 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7eaa1833-a4ad-422c-93db-6c442609d050-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522452 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7eaa1833-a4ad-422c-93db-6c442609d050-ready\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522467 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b924dacd-6667-491c-8464-b849b6ae7624-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522484 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsj5l\" (UniqueName: \"kubernetes.io/projected/e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80-kube-api-access-lsj5l\") pod \"package-server-manager-789f6589d5-stkgx\" (UID: \"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.522508 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-849nf\" (UniqueName: \"kubernetes.io/projected/08130b12-61f0-411b-98b6-52f25bf4c639-kube-api-access-849nf\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.524105 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cefc6d-9165-410b-9af2-0cdb4d56d85c-config\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.525119 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f32def92-50da-4035-a457-5d66df363c5c-tmpfs\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.526295 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-audit-policies\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.533766 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-25 11:19:54.033749276 +0000 UTC m=+108.074481603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.534171 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-mountpoint-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.535064 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81150bf8-7367-43d6-bd5c-205a6a07ac7c-config\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.536349 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.537797 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-csi-data-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.537996 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-socket-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.538721 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qzm5m"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.538856 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/910fa1ee-59a6-4ca3-b937-ab44679c93d9-config-volume\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.540193 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-plugins-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.541478 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/de78fa9e-1a35-4b2e-b5c4-e5de42bf7396-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjfgv\" (UID: \"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.542442 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24d89ac7-e3b2-48ff-a21b-2de526920192-audit-dir\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.542812 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgs7p\" (UniqueName: \"kubernetes.io/projected/1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c-kube-api-access-kgs7p\") pod \"downloads-7954f5f757-65dkm\" (UID: \"1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c\") " pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.543074 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.543179 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7eaa1833-a4ad-422c-93db-6c442609d050-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.543679 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b924dacd-6667-491c-8464-b849b6ae7624-config\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.543974 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f57976f3-2e78-4677-9491-d86626551cb8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.544043 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/45ba893b-be5a-43dc-a979-b528286386cd-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.544257 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f57976f3-2e78-4677-9491-d86626551cb8-images\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.544343 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4378ed4f-6f5c-418c-9027-b307ffde4aab-registration-dir\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc 
kubenswrapper[5005]: I0225 11:19:53.544518 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-trusted-ca\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.544725 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.544860 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910fa1ee-59a6-4ca3-b937-ab44679c93d9-metrics-tls\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.545313 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/75d48e65-ec5a-4706-bf55-2b97ee71af11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl9wb\" (UID: \"75d48e65-ec5a-4706-bf55-2b97ee71af11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.547163 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-stkgx\" (UID: \"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.547695 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/08130b12-61f0-411b-98b6-52f25bf4c639-signing-key\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.548280 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f32def92-50da-4035-a457-5d66df363c5c-apiservice-cert\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.549039 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2e8003-764c-4068-ada2-1555914b1dca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.549676 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.550276 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/24d89ac7-e3b2-48ff-a21b-2de526920192-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.550671 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.550990 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-metrics-tls\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.551589 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f32def92-50da-4035-a457-5d66df363c5c-webhook-cert\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.552012 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10b30994-5301-45ae-bf7b-f6c3c3f600b3-node-bootstrap-token\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.552447 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81150bf8-7367-43d6-bd5c-205a6a07ac7c-serving-cert\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.553173 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbfbbf24-e0c7-4342-9200-d507e157a3c3-srv-cert\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.553820 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-srv-cert\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.553363 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7eaa1833-a4ad-422c-93db-6c442609d050-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.554113 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15293032-ea03-4071-99a7-126b0348f0c0-cert\") pod \"ingress-canary-bpj6f\" (UID: \"15293032-ea03-4071-99a7-126b0348f0c0\") " pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.554840 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/08130b12-61f0-411b-98b6-52f25bf4c639-signing-cabundle\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.554949 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-config-volume\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.555056 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7eaa1833-a4ad-422c-93db-6c442609d050-ready\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.555174 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-secret-volume\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.556439 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-encryption-config\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.556921 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-etcd-client\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.557250 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f57976f3-2e78-4677-9491-d86626551cb8-proxy-tls\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.558483 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-proxy-tls\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.559612 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.560010 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b924dacd-6667-491c-8464-b849b6ae7624-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 
11:19:53.562205 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ba893b-be5a-43dc-a979-b528286386cd-serving-cert\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.563464 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cefc6d-9165-410b-9af2-0cdb4d56d85c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.564675 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a2e8003-764c-4068-ada2-1555914b1dca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.567335 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsbt\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-kube-api-access-dmsbt\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.574012 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10b30994-5301-45ae-bf7b-f6c3c3f600b3-certs\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " 
pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.574357 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d89ac7-e3b2-48ff-a21b-2de526920192-serving-cert\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.579620 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbfbbf24-e0c7-4342-9200-d507e157a3c3-profile-collector-cert\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.582821 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtlq\" (UniqueName: \"kubernetes.io/projected/cee21152-b956-4d24-a5f8-9f62bad2cc3e-kube-api-access-vwtlq\") pod \"cluster-image-registry-operator-dc59b4c8b-69c2n\" (UID: \"cee21152-b956-4d24-a5f8-9f62bad2cc3e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.591129 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-bound-sa-token\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.628775 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.629344 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.1293259 +0000 UTC m=+108.170058227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.642927 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849nf\" (UniqueName: \"kubernetes.io/projected/08130b12-61f0-411b-98b6-52f25bf4c639-kube-api-access-849nf\") pod \"service-ca-9c57cc56f-x6pzz\" (UID: \"08130b12-61f0-411b-98b6-52f25bf4c639\") " pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.658531 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlskk\" (UniqueName: \"kubernetes.io/projected/4378ed4f-6f5c-418c-9027-b307ffde4aab-kube-api-access-jlskk\") pod \"csi-hostpathplugin-trf82\" (UID: \"4378ed4f-6f5c-418c-9027-b307ffde4aab\") " pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.676663 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.677915 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pppc8\" (UniqueName: \"kubernetes.io/projected/de78fa9e-1a35-4b2e-b5c4-e5de42bf7396-kube-api-access-pppc8\") pod \"multus-admission-controller-857f4d67dd-hjfgv\" (UID: \"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.703752 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtqx\" (UniqueName: \"kubernetes.io/projected/3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e-kube-api-access-fhtqx\") pod \"migrator-59844c95c7-bfzqz\" (UID: \"3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.710960 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-277gg"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.725035 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.731570 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.731906 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.231895677 +0000 UTC m=+108.272628004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.733004 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7l5\" (UniqueName: \"kubernetes.io/projected/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-kube-api-access-7z7l5\") pod \"collect-profiles-29533635-968n9\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.734250 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvwj\" (UniqueName: 
\"kubernetes.io/projected/7eaa1833-a4ad-422c-93db-6c442609d050-kube-api-access-npvwj\") pod \"cni-sysctl-allowlist-ds-qcfmz\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: W0225 11:19:53.740352 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee6ad6b_1ad3_4bcf_a35a_d5f09a50e9d2.slice/crio-3695d83fbd1b809f7e2e1d2bb1e60065e6848be9526a7458b1be0003e29f9836 WatchSource:0}: Error finding container 3695d83fbd1b809f7e2e1d2bb1e60065e6848be9526a7458b1be0003e29f9836: Status 404 returned error can't find the container with id 3695d83fbd1b809f7e2e1d2bb1e60065e6848be9526a7458b1be0003e29f9836 Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.752284 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.754594 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5qr\" (UniqueName: \"kubernetes.io/projected/4a2e8003-764c-4068-ada2-1555914b1dca-kube-api-access-gq5qr\") pod \"kube-storage-version-migrator-operator-b67b599dd-plj26\" (UID: \"4a2e8003-764c-4068-ada2-1555914b1dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.760217 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.761487 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.772126 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.772622 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl56b\" (UniqueName: \"kubernetes.io/projected/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-kube-api-access-sl56b\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.789838 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.792538 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w56xk\" (UniqueName: \"kubernetes.io/projected/81150bf8-7367-43d6-bd5c-205a6a07ac7c-kube-api-access-w56xk\") pod \"service-ca-operator-777779d784-rcnr2\" (UID: \"81150bf8-7367-43d6-bd5c-205a6a07ac7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.799566 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.818455 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bw5d\" (UniqueName: \"kubernetes.io/projected/15293032-ea03-4071-99a7-126b0348f0c0-kube-api-access-8bw5d\") pod \"ingress-canary-bpj6f\" (UID: \"15293032-ea03-4071-99a7-126b0348f0c0\") " pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.818465 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-trf82" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.831209 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8fw\" (UniqueName: \"kubernetes.io/projected/910fa1ee-59a6-4ca3-b937-ab44679c93d9-kube-api-access-tb8fw\") pod \"dns-default-6q7wr\" (UID: \"910fa1ee-59a6-4ca3-b937-ab44679c93d9\") " pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.832447 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.832882 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.332866157 +0000 UTC m=+108.373598484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.833419 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rsd6r"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.837207 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.846128 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m"] Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.853022 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55cng\" (UniqueName: \"kubernetes.io/projected/24d89ac7-e3b2-48ff-a21b-2de526920192-kube-api-access-55cng\") pod \"apiserver-7bbb656c7d-xwmct\" (UID: \"24d89ac7-e3b2-48ff-a21b-2de526920192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.881882 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btl7p\" (UniqueName: \"kubernetes.io/projected/10b30994-5301-45ae-bf7b-f6c3c3f600b3-kube-api-access-btl7p\") pod \"machine-config-server-2bj7t\" (UID: \"10b30994-5301-45ae-bf7b-f6c3c3f600b3\") " pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.901903 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxs56\" (UniqueName: \"kubernetes.io/projected/fbfbbf24-e0c7-4342-9200-d507e157a3c3-kube-api-access-wxs56\") pod \"catalog-operator-68c6474976-5zrbx\" (UID: \"fbfbbf24-e0c7-4342-9200-d507e157a3c3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.908545 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.917996 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4766\" (UniqueName: \"kubernetes.io/projected/f57976f3-2e78-4677-9491-d86626551cb8-kube-api-access-v4766\") pod \"machine-config-operator-74547568cd-4m859\" (UID: \"f57976f3-2e78-4677-9491-d86626551cb8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.925328 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.934197 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b924dacd-6667-491c-8464-b849b6ae7624-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nsq6k\" (UID: \"b924dacd-6667-491c-8464-b849b6ae7624\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.935099 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:53 crc kubenswrapper[5005]: E0225 11:19:53.935608 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-25 11:19:54.435593442 +0000 UTC m=+108.476325769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.942663 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.962007 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmfcn\" (UniqueName: \"kubernetes.io/projected/45ba893b-be5a-43dc-a979-b528286386cd-kube-api-access-gmfcn\") pod \"openshift-config-operator-7777fb866f-ndnvt\" (UID: \"45ba893b-be5a-43dc-a979-b528286386cd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.974132 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbpw\" (UniqueName: \"kubernetes.io/projected/9a85e597-ce82-4b41-b00e-e142bbd38849-kube-api-access-skbpw\") pod \"marketplace-operator-79b997595-hzbjf\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:53 crc kubenswrapper[5005]: I0225 11:19:53.995016 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.001734 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7r5\" (UniqueName: \"kubernetes.io/projected/ae6aaf14-6bb2-4e68-a46b-03209f55ba4d-kube-api-access-7j7r5\") pod \"olm-operator-6b444d44fb-rt5kw\" (UID: \"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.008010 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.021457 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzvt8\" (UniqueName: \"kubernetes.io/projected/75d48e65-ec5a-4706-bf55-2b97ee71af11-kube-api-access-gzvt8\") pod \"control-plane-machine-set-operator-78cbb6b69f-hl9wb\" (UID: \"75d48e65-ec5a-4706-bf55-2b97ee71af11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.036102 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.036315 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-25 11:19:54.536299862 +0000 UTC m=+108.577032189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.036762 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.037059 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.537052869 +0000 UTC m=+108.577785196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.045102 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cefc6d-9165-410b-9af2-0cdb4d56d85c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6v8gf\" (UID: \"65cefc6d-9165-410b-9af2-0cdb4d56d85c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.045209 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" Feb 25 11:19:54 crc kubenswrapper[5005]: W0225 11:19:54.052977 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eaa1833_a4ad_422c_93db_6c442609d050.slice/crio-7d48cff40346e6468c78887a74a5267ab2028939d4584cb2a99b1da64c92e547 WatchSource:0}: Error finding container 7d48cff40346e6468c78887a74a5267ab2028939d4584cb2a99b1da64c92e547: Status 404 returned error can't find the container with id 7d48cff40346e6468c78887a74a5267ab2028939d4584cb2a99b1da64c92e547 Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.058044 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d5348af-76fc-486e-9ad6-c0bbdd2d9223-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5j95b\" (UID: \"5d5348af-76fc-486e-9ad6-c0bbdd2d9223\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.066890 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.108142 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.108180 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2bj7t" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.108301 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bpj6f" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.109362 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blsq7\" (UniqueName: \"kubernetes.io/projected/f32def92-50da-4035-a457-5d66df363c5c-kube-api-access-blsq7\") pod \"packageserver-d55dfcdfc-8zcnw\" (UID: \"f32def92-50da-4035-a457-5d66df363c5c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.132280 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdd8\" (UniqueName: \"kubernetes.io/projected/e8ae5d83-6d61-4c8e-b86a-f0456eedc676-kube-api-access-wxdd8\") pod \"machine-config-controller-84d6567774-9ll9j\" (UID: \"e8ae5d83-6d61-4c8e-b86a-f0456eedc676\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.132508 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsj5l\" (UniqueName: 
\"kubernetes.io/projected/e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80-kube-api-access-lsj5l\") pod \"package-server-manager-789f6589d5-stkgx\" (UID: \"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.139522 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.139935 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.639918978 +0000 UTC m=+108.680651305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.164335 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjfgv"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.201207 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x6pzz"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.201489 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.221829 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.234173 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.240811 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.241191 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.741177308 +0000 UTC m=+108.781909635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.251748 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.262738 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" event={"ID":"07fd30ae-0263-4080-9eca-a61666c2937d","Type":"ContainerStarted","Data":"b9cc4879fd76d3c02db758df67945414f7ec37ad89d3a030b8305dc87cda8430"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.271398 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" event={"ID":"4d0749b3-9886-4360-8a7e-c80fa8921a50","Type":"ContainerStarted","Data":"df14b4abb316810eed3c72df78f21b566f60367974621af5d8d7b8290bb405ce"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.271436 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" event={"ID":"4d0749b3-9886-4360-8a7e-c80fa8921a50","Type":"ContainerStarted","Data":"a0872a40edf88d9c64c733853052196b305c014751a63987e24e9944e6045a35"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.275020 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" event={"ID":"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf","Type":"ContainerStarted","Data":"be620304d0343f16f35b5f9dcf3d4ca067588a684d7ba51a39fe930e57bf6786"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.275122 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" event={"ID":"edfd83bd-08e3-4a78-9b03-1fa3100e5ebf","Type":"ContainerStarted","Data":"5c673dd7fa7735223a251accef33abc12567f5503817977fd777064ce6c12f1c"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.280473 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.284920 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" event={"ID":"e972077c-5857-4a15-bd23-21b21fbad7b1","Type":"ContainerStarted","Data":"9de2665b1e85562d6eaf297c410872fe6d66d92e123a4016dae828cb76709463"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.284976 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" event={"ID":"e972077c-5857-4a15-bd23-21b21fbad7b1","Type":"ContainerStarted","Data":"f0a9dba9055c4f71a1e6ed32fa8910778dca0cb9df098b088586bcda5b29b176"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.285671 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.288282 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.295027 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-65dkm"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.296149 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" event={"ID":"af95ef77-54c8-4b77-9a76-fcac6a29c993","Type":"ContainerStarted","Data":"89c1a3c9da8ee44810800f7e52af07e0a67338f55c12978ba9a023c16265c8f1"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.296193 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" event={"ID":"af95ef77-54c8-4b77-9a76-fcac6a29c993","Type":"ContainerStarted","Data":"69130f27a5dd5dab23a80c8e4187079506643fee524c2e0d17513e6f03f90b39"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.296209 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" event={"ID":"af95ef77-54c8-4b77-9a76-fcac6a29c993","Type":"ContainerStarted","Data":"1a0f198d1f5df92f4f9ca04c8c488c266f3e6838cc6615322f80d859ed94b6e3"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.301569 5005 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mqmwp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.301632 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" podUID="e972077c-5857-4a15-bd23-21b21fbad7b1" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.315463 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gnfvv" event={"ID":"faec049e-e45a-4ab3-8761-f4bd01acc732","Type":"ContainerStarted","Data":"a688e3b49754a7fec73e3aa733bb5fcb8bb92e11a5efa85065cb10fd2dea3172"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.315501 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gnfvv" event={"ID":"faec049e-e45a-4ab3-8761-f4bd01acc732","Type":"ContainerStarted","Data":"0b216d04ac539913b3fa716723fde0754bff84e404523abd9c95ec48b71a3763"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.319883 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.322054 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" event={"ID":"9254e22f-7734-4750-b29a-af5f2872eeef","Type":"ContainerStarted","Data":"3f409ca6910ed3221a46fd9f193a4a0c6667472bb3cca17b21d42b549d7b57d6"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.326260 5005 generic.go:334] "Generic (PLEG): container finished" podID="1233075e-7e1b-48ab-bdec-50d771eec172" containerID="d1edc332c078fe1b44dc43b263f8307707022aada21f59205c4c626205ca4487" exitCode=0 Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.326318 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" event={"ID":"1233075e-7e1b-48ab-bdec-50d771eec172","Type":"ContainerDied","Data":"d1edc332c078fe1b44dc43b263f8307707022aada21f59205c4c626205ca4487"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 
11:19:54.326346 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" event={"ID":"1233075e-7e1b-48ab-bdec-50d771eec172","Type":"ContainerStarted","Data":"23ed653a33690b40fb7d3916aa52f0308794044f46feffada808538363ec647f"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.329191 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" event={"ID":"7eaa1833-a4ad-422c-93db-6c442609d050","Type":"ContainerStarted","Data":"7d48cff40346e6468c78887a74a5267ab2028939d4584cb2a99b1da64c92e547"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.336915 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-277gg" event={"ID":"cf5c0827-c687-4ab2-a02f-7b74d00a57db","Type":"ContainerStarted","Data":"063de13593698c3e6a9b8840a86e4ac3c7f7b9d112c02c0c914c432e5765ec32"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.337222 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.341266 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" event={"ID":"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2","Type":"ContainerStarted","Data":"3695d83fbd1b809f7e2e1d2bb1e60065e6848be9526a7458b1be0003e29f9836"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.341611 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.342004 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.841986782 +0000 UTC m=+108.882719109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.344875 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.351192 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" event={"ID":"771a10ce-b3f7-4d81-9963-51d7a38c2cdf","Type":"ContainerStarted","Data":"bf63fa7f146a4273dc5e50edb279237e014f7d9c51f5d1aff68ca8adb753da51"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.351233 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" event={"ID":"771a10ce-b3f7-4d81-9963-51d7a38c2cdf","Type":"ContainerStarted","Data":"8154685c3451aa27d5df21b143f1cd2a92ba4b2d4d77cca7b2f78c6a631de015"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.351247 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" event={"ID":"771a10ce-b3f7-4d81-9963-51d7a38c2cdf","Type":"ContainerStarted","Data":"9cdcbb7305b6459c3cfd303d2e4622cc4e53236152609d9e64da58688dfe9ac1"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.355504 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-trf82"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.355813 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" event={"ID":"20559448-c2fa-4138-ba2f-f9907e6ef183","Type":"ContainerStarted","Data":"427446156d783078cd33c2d344138a971a66e15fb67ad205fec146affc2e5db9"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.355860 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" event={"ID":"20559448-c2fa-4138-ba2f-f9907e6ef183","Type":"ContainerStarted","Data":"a9da854a2c94728afa557f5443008ca9093fc5ebc676dc917de99ec59fd356c1"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.358446 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" event={"ID":"783f9d2c-ef3f-4915-819f-16ad8ddf943a","Type":"ContainerStarted","Data":"d36c8d3bddc5398609b8d2aefce62286e556f55c1d04f10d63d1a89eaa43589d"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.358476 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" event={"ID":"783f9d2c-ef3f-4915-819f-16ad8ddf943a","Type":"ContainerStarted","Data":"70a7694912f37560feb8c777edd32e486a0115c61ecb9e999e0a01f8befaec28"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.358673 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.359300 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" event={"ID":"9897feaf-0f0f-44a2-bb22-8863579d6359","Type":"ContainerStarted","Data":"c7aed6ad648831587addaa3269752bb2fd9e7d854bdaa3139a2929282baaee77"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.359322 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" 
event={"ID":"9897feaf-0f0f-44a2-bb22-8863579d6359","Type":"ContainerStarted","Data":"6373d6f8c78636a99def47adcda1e4a7af15f855b09b2b9fc4a00b06eafb1d3a"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.359742 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.360360 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" event={"ID":"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396","Type":"ContainerStarted","Data":"2d00c2ab8bbec1440c640b216faad762fb1348b5597d000830d10f8dc58e5f23"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.361686 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" event={"ID":"953dbec8-f4fc-411b-8a6f-191a52d20523","Type":"ContainerStarted","Data":"befc699afa357b98599c421933fc1ebce92b0b7f77ae43bcb0f636dfe22fc51c"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.361712 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" event={"ID":"953dbec8-f4fc-411b-8a6f-191a52d20523","Type":"ContainerStarted","Data":"ace6a99bd0288d5885cffbc66d06f29c7e207598019cb6ac2c4ad8553573cae8"} Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.362408 5005 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mn7l8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.362432 5005 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rx5lf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get 
\"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.362444 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" podUID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.362477 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" podUID="9897feaf-0f0f-44a2-bb22-8863579d6359" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.451651 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.452634 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:54.952618154 +0000 UTC m=+108.993350481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.570016 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.570813 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.070784921 +0000 UTC m=+109.111517248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.615826 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6q7wr"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.631460 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.672008 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.672876 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.172863961 +0000 UTC m=+109.213596288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.746921 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.750215 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.766698 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.775964 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vmm2d"] Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.776340 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.784923 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.280988312 +0000 UTC m=+109.321720639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.785097 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.788358 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.288341929 +0000 UTC m=+109.329074256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.812041 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gnfvv" podStartSLOduration=48.812024642 podStartE2EDuration="48.812024642s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:54.810427884 +0000 UTC m=+108.851160211" watchObservedRunningTime="2026-02-25 11:19:54.812024642 +0000 UTC m=+108.852756969" Feb 25 11:19:54 crc kubenswrapper[5005]: W0225 11:19:54.868399 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbfbbf24_e0c7_4342_9200_d507e157a3c3.slice/crio-0423f7136a43722cc497e54d98607483ac569638af3d61aaf9f40f18b83bf9be WatchSource:0}: Error finding container 0423f7136a43722cc497e54d98607483ac569638af3d61aaf9f40f18b83bf9be: Status 404 returned error can't find the container with id 0423f7136a43722cc497e54d98607483ac569638af3d61aaf9f40f18b83bf9be Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.891184 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc 
kubenswrapper[5005]: E0225 11:19:54.891328 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.391302972 +0000 UTC m=+109.432035299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.891474 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.891795 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.391783839 +0000 UTC m=+109.432516166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:54 crc kubenswrapper[5005]: I0225 11:19:54.993110 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:54 crc kubenswrapper[5005]: E0225 11:19:54.993480 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.493447595 +0000 UTC m=+109.534179922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.032980 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" podStartSLOduration=49.032961885 podStartE2EDuration="49.032961885s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.009445388 +0000 UTC m=+109.050177715" watchObservedRunningTime="2026-02-25 11:19:55.032961885 +0000 UTC m=+109.073694212" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.073303 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lh4hs" podStartSLOduration=49.073284064 podStartE2EDuration="49.073284064s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.038446205 +0000 UTC m=+109.079178552" watchObservedRunningTime="2026-02-25 11:19:55.073284064 +0000 UTC m=+109.114016391" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.095224 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: 
\"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.095592 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.595580416 +0000 UTC m=+109.636312743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.198720 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.199179 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.699141961 +0000 UTC m=+109.739874288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.208628 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" podStartSLOduration=49.208609856 podStartE2EDuration="49.208609856s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.208413829 +0000 UTC m=+109.249146156" watchObservedRunningTime="2026-02-25 11:19:55.208609856 +0000 UTC m=+109.249342183" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.287804 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.297263 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:19:55 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:19:55 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:19:55 crc kubenswrapper[5005]: healthz check failed Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.297318 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.300937 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.301479 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.801467341 +0000 UTC m=+109.842199668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.411098 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tzkzd" podStartSLOduration=49.411082905 podStartE2EDuration="49.411082905s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.38239927 +0000 UTC m=+109.423131597" watchObservedRunningTime="2026-02-25 11:19:55.411082905 +0000 UTC m=+109.451815222" Feb 25 11:19:55 crc 
kubenswrapper[5005]: I0225 11:19:55.412333 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.412767 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:55.912749956 +0000 UTC m=+109.953482283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.425477 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.426991 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.430744 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.431903 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-65dkm" 
event={"ID":"1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c","Type":"ContainerStarted","Data":"60fb2f3419562c47b20324a2e05ece7cba3c6f370d9c3d49366d386fe4cb6ab6"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.444628 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" podStartSLOduration=49.444606237 podStartE2EDuration="49.444606237s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.429624851 +0000 UTC m=+109.470357178" watchObservedRunningTime="2026-02-25 11:19:55.444606237 +0000 UTC m=+109.485338564" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.446365 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" event={"ID":"f7118aeb-770e-44b1-87de-f0a633360ff6","Type":"ContainerStarted","Data":"a244927d116f15f1acaeb1e0d1195e63df2a5af659be585b389726edb644f792"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.477098 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6q7wr" event={"ID":"910fa1ee-59a6-4ca3-b937-ab44679c93d9","Type":"ContainerStarted","Data":"5ab35162b3eed6bdeb8f5e17352fedc662635098d3d6ac0121039448d02925dc"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.483337 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hzbjf"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.493304 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" event={"ID":"24d89ac7-e3b2-48ff-a21b-2de526920192","Type":"ContainerStarted","Data":"445ba4d449b93c223dced7b6d4612f28cf66d9a244563adf644c18fde9a9da56"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.503882 5005 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" event={"ID":"fbfbbf24-e0c7-4342-9200-d507e157a3c3","Type":"ContainerStarted","Data":"0423f7136a43722cc497e54d98607483ac569638af3d61aaf9f40f18b83bf9be"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.505895 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" event={"ID":"08130b12-61f0-411b-98b6-52f25bf4c639","Type":"ContainerStarted","Data":"25e0be123db98f3811157ead2dbf1d8dac8c89a7f777755b98d0e9f3dd5a818e"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.514749 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2bj7t" event={"ID":"10b30994-5301-45ae-bf7b-f6c3c3f600b3","Type":"ContainerStarted","Data":"5b721b2104c03281819005feea1fa8ddf172227a16c0c47f70e8374234fc4d8f"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.516811 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.517285 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.017271065 +0000 UTC m=+110.058003392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.618472 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.619720 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.622680 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4m859"] Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.623427 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.123407273 +0000 UTC m=+110.164139600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.640938 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" event={"ID":"3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e","Type":"ContainerStarted","Data":"1b5c457e99617ed091f7af16000f472bc90b5c0eb17857fbc232ddc200b649b2"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.655624 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bpj6f"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.676844 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.678997 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" event={"ID":"07fd30ae-0263-4080-9eca-a61666c2937d","Type":"ContainerStarted","Data":"873f466c12e5cdcbe6e3eae1b7b378b984651d9fc595c83aa4052b3b934d47fc"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.698466 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.699983 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" 
event={"ID":"871158bf-c5f6-4e49-981a-bf00d5b8c4c7","Type":"ContainerStarted","Data":"2ec1501c318a3c6dd6633bb59e48188d8fe6d618dfecec32dce0e76bb3a391f2"} Feb 25 11:19:55 crc kubenswrapper[5005]: W0225 11:19:55.720880 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15293032_ea03_4071_99a7_126b0348f0c0.slice/crio-9c18476f8d438bde6e3ca27ac611d3cc19ddc39e4a4f83a48fe2180b3aa2d1ff WatchSource:0}: Error finding container 9c18476f8d438bde6e3ca27ac611d3cc19ddc39e4a4f83a48fe2180b3aa2d1ff: Status 404 returned error can't find the container with id 9c18476f8d438bde6e3ca27ac611d3cc19ddc39e4a4f83a48fe2180b3aa2d1ff Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.721491 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.722569 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.222553596 +0000 UTC m=+110.263285923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.754169 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.755886 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.787084 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" event={"ID":"4ee6ad6b-1ad3-4bcf-a35a-d5f09a50e9d2","Type":"ContainerStarted","Data":"6474dd2ff994fff47dfd8fe5b1ba1aade3fc38e0c7ccfc6086ee7736cfa0a1a2"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.792499 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-trf82" event={"ID":"4378ed4f-6f5c-418c-9027-b307ffde4aab","Type":"ContainerStarted","Data":"6ac7ff3a8e57f75d83fc53c22d5b51a3dd5eb14fc64ec203827442149d932fc0"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.792980 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.793663 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gjfbr" podStartSLOduration=49.793645018 
podStartE2EDuration="49.793645018s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.788041463 +0000 UTC m=+109.828773810" watchObservedRunningTime="2026-02-25 11:19:55.793645018 +0000 UTC m=+109.834377345" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.823666 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.824581 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.324558464 +0000 UTC m=+110.365290791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.836735 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-75zdc" podStartSLOduration=49.836721027 podStartE2EDuration="49.836721027s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.816602234 +0000 UTC m=+109.857334561" watchObservedRunningTime="2026-02-25 11:19:55.836721027 +0000 UTC m=+109.877453354" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.841017 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.845589 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" event={"ID":"cee21152-b956-4d24-a5f8-9f62bad2cc3e","Type":"ContainerStarted","Data":"383cbdcb51994fc696e78c62a608027eaf34e84b3d8c16043c2c25788a959cd1"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.854451 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" event={"ID":"9254e22f-7734-4750-b29a-af5f2872eeef","Type":"ContainerStarted","Data":"9c8bfb0432863a94670c68e910a64b7db18e5c8456f06f4b84b139211fb078b5"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.855723 5005 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.875695 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw"] Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.876794 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-277gg" event={"ID":"cf5c0827-c687-4ab2-a02f-7b74d00a57db","Type":"ContainerStarted","Data":"128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec"} Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.884291 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.905799 5005 patch_prober.go:28] interesting pod/console-operator-58897d9998-rsd6r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.905838 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" podUID="9254e22f-7734-4750-b29a-af5f2872eeef" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.907658 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.909522 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41760: no serving certificate available for the kubelet" Feb 
25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.925422 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:55 crc kubenswrapper[5005]: E0225 11:19:55.936225 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.436203023 +0000 UTC m=+110.476935350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.962694 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2qtlb" podStartSLOduration=49.962614815 podStartE2EDuration="49.962614815s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.944466574 +0000 UTC m=+109.985198911" watchObservedRunningTime="2026-02-25 11:19:55.962614815 +0000 UTC m=+110.003347142" Feb 25 11:19:55 crc kubenswrapper[5005]: I0225 11:19:55.993896 5005 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qzm5m" podStartSLOduration=49.993877024 podStartE2EDuration="49.993877024s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:55.992039128 +0000 UTC m=+110.032771445" watchObservedRunningTime="2026-02-25 11:19:55.993877024 +0000 UTC m=+110.034609351" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.009089 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41766: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.026423 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.026703 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.526689471 +0000 UTC m=+110.567421788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.050442 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.055918 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" podStartSLOduration=50.055891995 podStartE2EDuration="50.055891995s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:56.048308749 +0000 UTC m=+110.089041076" watchObservedRunningTime="2026-02-25 11:19:56.055891995 +0000 UTC m=+110.096624322" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.110802 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41774: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.133281 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.133942 5005 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.633931759 +0000 UTC m=+110.674664086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.175402 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xlq97" podStartSLOduration=50.175386269 podStartE2EDuration="50.175386269s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:56.132348772 +0000 UTC m=+110.173081099" watchObservedRunningTime="2026-02-25 11:19:56.175386269 +0000 UTC m=+110.216118596" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.210034 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41788: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.234881 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.235339 5005 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.735318094 +0000 UTC m=+110.776050421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.262064 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-277gg" podStartSLOduration=50.262044239 podStartE2EDuration="50.262044239s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:56.26016883 +0000 UTC m=+110.300901167" watchObservedRunningTime="2026-02-25 11:19:56.262044239 +0000 UTC m=+110.302776566" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.262793 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2bj7t" podStartSLOduration=6.2627869050000005 podStartE2EDuration="6.262786905s" podCreationTimestamp="2026-02-25 11:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:56.209748302 +0000 UTC m=+110.250480629" watchObservedRunningTime="2026-02-25 11:19:56.262786905 +0000 UTC m=+110.303519232" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.290938 5005 
patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:19:56 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:19:56 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:19:56 crc kubenswrapper[5005]: healthz check failed Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.291009 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.294523 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v692m" podStartSLOduration=50.294500141 podStartE2EDuration="50.294500141s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:56.287018869 +0000 UTC m=+110.327751196" watchObservedRunningTime="2026-02-25 11:19:56.294500141 +0000 UTC m=+110.335232468" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.312121 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41800: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.337507 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.337877 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.837866451 +0000 UTC m=+110.878598778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.439699 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41812: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.440521 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.440839 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:56.940825754 +0000 UTC m=+110.981558081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.522814 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41816: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.541200 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.541581 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.041566675 +0000 UTC m=+111.082299002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.642815 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.643232 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.143218339 +0000 UTC m=+111.183950666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.716788 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41818: no serving certificate available for the kubelet" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.743914 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.745210 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.245195586 +0000 UTC m=+111.285927913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.845172 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.845964 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.345912877 +0000 UTC m=+111.386645204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.911418 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" event={"ID":"65cefc6d-9165-410b-9af2-0cdb4d56d85c","Type":"ContainerStarted","Data":"c24cdd8c71c5cc3c7a922ecab96faba0de68d1d19124ece1fa4846b7cfb3e216"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.914347 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" event={"ID":"65cefc6d-9165-410b-9af2-0cdb4d56d85c","Type":"ContainerStarted","Data":"e30f36237aedf5c60426a80268121f9a8a5ee255f62a2647e8b25e317f821672"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.917423 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" event={"ID":"08130b12-61f0-411b-98b6-52f25bf4c639","Type":"ContainerStarted","Data":"f6b04f5b489dc93129d73420dd404fac45f7e32e958ed58fec07976727d10b50"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.928935 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" event={"ID":"b924dacd-6667-491c-8464-b849b6ae7624","Type":"ContainerStarted","Data":"1ff8cdf6e55ffd1da0932f1a5d72fcd0701015bb8084de885e20c433f7b6f833"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.928975 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" event={"ID":"b924dacd-6667-491c-8464-b849b6ae7624","Type":"ContainerStarted","Data":"024dd071df7398e8a5fa3e08885eae12c3a4156848417252c6928912429e4254"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.937941 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" event={"ID":"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80","Type":"ContainerStarted","Data":"2053ab7ba1e32e4ca2ceac47cd377973d731ae86a8bf2e567fee8d38581c82e0"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.937992 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" event={"ID":"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80","Type":"ContainerStarted","Data":"3aeb60cf6f7e8ef1fe55c0164dc5e62dad8f1371e8fa3950274452e6c79e5440"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.953643 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:56 crc kubenswrapper[5005]: E0225 11:19:56.955364 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.455350095 +0000 UTC m=+111.496082422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.970676 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" event={"ID":"9a85e597-ce82-4b41-b00e-e142bbd38849","Type":"ContainerStarted","Data":"070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.970801 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" event={"ID":"9a85e597-ce82-4b41-b00e-e142bbd38849","Type":"ContainerStarted","Data":"198c82bb6b80d210e5c253bf7f76bd313de0feeb1efa9de3d0a45ebeb2e1a9c3"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.971562 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.977872 5005 generic.go:334] "Generic (PLEG): container finished" podID="24d89ac7-e3b2-48ff-a21b-2de526920192" containerID="7ba57417ebcf71e2a895f4e4fefd6c93cd32bfe41fb069bf7afc4893ed2b492d" exitCode=0 Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.978557 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" event={"ID":"24d89ac7-e3b2-48ff-a21b-2de526920192","Type":"ContainerDied","Data":"7ba57417ebcf71e2a895f4e4fefd6c93cd32bfe41fb069bf7afc4893ed2b492d"} Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.987082 5005 patch_prober.go:28] 
interesting pod/marketplace-operator-79b997595-hzbjf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.987131 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 25 11:19:56 crc kubenswrapper[5005]: I0225 11:19:56.988336 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2bj7t" event={"ID":"10b30994-5301-45ae-bf7b-f6c3c3f600b3","Type":"ContainerStarted","Data":"4d8d9b9089f701d70a8bc0afe5420a2d9fb0fa3cd87e9b89acb60f1af9044517"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.006235 5005 generic.go:334] "Generic (PLEG): container finished" podID="45ba893b-be5a-43dc-a979-b528286386cd" containerID="6944032f1da521f8dd1cbb73bf88dbde2b79563ab72ca8857ce9a61afb4e23d3" exitCode=0 Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.006306 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" event={"ID":"45ba893b-be5a-43dc-a979-b528286386cd","Type":"ContainerDied","Data":"6944032f1da521f8dd1cbb73bf88dbde2b79563ab72ca8857ce9a61afb4e23d3"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.006331 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" event={"ID":"45ba893b-be5a-43dc-a979-b528286386cd","Type":"ContainerStarted","Data":"36203f1718718bab91a2644e6bf777ec2cb677fd8760d201df85796eeb739778"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 
11:19:57.009319 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" event={"ID":"4a2e8003-764c-4068-ada2-1555914b1dca","Type":"ContainerStarted","Data":"e8595864af2b1a9f08f3efe592ce8d5e449cce5206a81904dccfaaf8c07b0aef"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.009657 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" event={"ID":"4a2e8003-764c-4068-ada2-1555914b1dca","Type":"ContainerStarted","Data":"931c3d76e9f40aabe5b88e9e5069aa9eef098aaef2717130fe4b8ffaec6fb0c1"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.045820 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bpj6f" event={"ID":"15293032-ea03-4071-99a7-126b0348f0c0","Type":"ContainerStarted","Data":"5cb8fbb8a9446dcb771bad599aa9ca5d520ced2fb8d7ef4d801bc3cb291e6251"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.045875 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bpj6f" event={"ID":"15293032-ea03-4071-99a7-126b0348f0c0","Type":"ContainerStarted","Data":"9c18476f8d438bde6e3ca27ac611d3cc19ddc39e4a4f83a48fe2180b3aa2d1ff"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.061529 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-65dkm" event={"ID":"1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c","Type":"ContainerStarted","Data":"b8048dd12e3cefa7ee4e62311014a1c0d68aeab662a5748e1084bc1ec803cfdf"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.062273 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.062597 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.063100 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.563077071 +0000 UTC m=+111.603809398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.064895 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" event={"ID":"cee21152-b956-4d24-a5f8-9f62bad2cc3e","Type":"ContainerStarted","Data":"878ac8be6f6b964ed049b38d54d7c89398aae57f256a28159aeedaf16aa949f7"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.067123 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" event={"ID":"3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e","Type":"ContainerStarted","Data":"3eb64fe0cafaa239010a2436c74cb63060a1aaf6a7385b708182c444731494d8"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.067153 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" 
event={"ID":"3dd9dd0c-5dc1-4ab5-a5fc-68569c68198e","Type":"ContainerStarted","Data":"f81cebe4e4853bb3e4d2e382b1dd1c849b93514fadb5fb1281cc6a75dc87f4bd"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.076022 5005 patch_prober.go:28] interesting pod/downloads-7954f5f757-65dkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.076075 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-65dkm" podUID="1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.081531 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" event={"ID":"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d","Type":"ContainerStarted","Data":"14bab4afc71b6d21abb62c024b593a31cb7a367a587c97bf8ae3fa401abdcae0"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.081573 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" event={"ID":"ae6aaf14-6bb2-4e68-a46b-03209f55ba4d","Type":"ContainerStarted","Data":"49bdd37e1fcbfc25e72c30af2b7611932161dfa21bdc29017a4868e103f18e85"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.083466 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.090730 5005 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rt5kw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.090784 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" podUID="ae6aaf14-6bb2-4e68-a46b-03209f55ba4d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.110129 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" event={"ID":"f32def92-50da-4035-a457-5d66df363c5c","Type":"ContainerStarted","Data":"637f652b35c8f7c31b51def207dd15a8412472564252b6c404909f2a4dc18e05"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.112346 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.114301 5005 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8zcnw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.114350 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" podUID="f32def92-50da-4035-a457-5d66df363c5c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.133621 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" event={"ID":"871158bf-c5f6-4e49-981a-bf00d5b8c4c7","Type":"ContainerStarted","Data":"a1f530703922ade8a94f5905b688352a03c649440f48f176f6fda094a4b23fd7"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.163242 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-65dkm" podStartSLOduration=51.163223752 podStartE2EDuration="51.163223752s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.159229726 +0000 UTC m=+111.199962063" watchObservedRunningTime="2026-02-25 11:19:57.163223752 +0000 UTC m=+111.203956079" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.164478 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.165214 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" event={"ID":"75d48e65-ec5a-4706-bf55-2b97ee71af11","Type":"ContainerStarted","Data":"5c7f4bccc29ddbad874ee18314f6678f7a9761db4a7c1ae952fccf7de48b7101"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.165254 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" event={"ID":"75d48e65-ec5a-4706-bf55-2b97ee71af11","Type":"ContainerStarted","Data":"ea0b465a816de60498da21ace14ec18503486be9af0e3b9ab7d7242d20408223"} Feb 25 11:19:57 crc kubenswrapper[5005]: 
E0225 11:19:57.166046 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.666030984 +0000 UTC m=+111.706763361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.210816 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6q7wr" event={"ID":"910fa1ee-59a6-4ca3-b937-ab44679c93d9","Type":"ContainerStarted","Data":"e7de1cd9eb411988e52d1272e5ed8b05c01f37edc9365d6b1656a1a8041586af"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.210888 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6q7wr" event={"ID":"910fa1ee-59a6-4ca3-b937-ab44679c93d9","Type":"ContainerStarted","Data":"6f4219eae033e8a96bc4316711630c95d8d18156ee5b4e7f427b69e220864f31"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.212243 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6q7wr" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.214485 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-x6pzz" podStartSLOduration=51.214474429 podStartE2EDuration="51.214474429s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-25 11:19:57.200813711 +0000 UTC m=+111.241546038" watchObservedRunningTime="2026-02-25 11:19:57.214474429 +0000 UTC m=+111.255206756" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.228081 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" event={"ID":"5d5348af-76fc-486e-9ad6-c0bbdd2d9223","Type":"ContainerStarted","Data":"3c949e3845d8312d9c94bb148cbffcfb2552dc3cc83655a361eb4db777967aa8"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.228123 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" event={"ID":"5d5348af-76fc-486e-9ad6-c0bbdd2d9223","Type":"ContainerStarted","Data":"5f33e3929f9b84761df1fce172125ab63f8d40521e9b844d37f0615079c7bbab"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.250308 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6v8gf" podStartSLOduration=51.250293675 podStartE2EDuration="51.250293675s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.249860159 +0000 UTC m=+111.290592486" watchObservedRunningTime="2026-02-25 11:19:57.250293675 +0000 UTC m=+111.291026002" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.257737 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" event={"ID":"81150bf8-7367-43d6-bd5c-205a6a07ac7c","Type":"ContainerStarted","Data":"948f2e3596d624d7a7b8f3c89fc92d222c76d3253a848a86c00eaa25fb8fb194"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.257775 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" 
event={"ID":"81150bf8-7367-43d6-bd5c-205a6a07ac7c","Type":"ContainerStarted","Data":"63942d7457a030c106b8b14a1d8d35b67edcd684949db0505b50c7bf16d271c8"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.266071 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.266188 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.766165883 +0000 UTC m=+111.806898210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.266482 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.266812 5005 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.766800416 +0000 UTC m=+111.807532743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.307282 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:19:57 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:19:57 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:19:57 crc kubenswrapper[5005]: healthz check failed Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.307692 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.319341 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" event={"ID":"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396","Type":"ContainerStarted","Data":"f55cbfb4008849d25a45be0ea84538dbb42a42e4579d6f68d30453d730aeb797"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.326118 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" event={"ID":"f57976f3-2e78-4677-9491-d86626551cb8","Type":"ContainerStarted","Data":"d20fb27c19cffcfa51f256937af4d317fcf02d81d4905dacc61d43406408250b"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.326157 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" event={"ID":"f57976f3-2e78-4677-9491-d86626551cb8","Type":"ContainerStarted","Data":"45548c4418de3150cc181ecd8c403b9595af5592f1f335e86de8b162f9cf57f4"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.347464 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" event={"ID":"fbfbbf24-e0c7-4342-9200-d507e157a3c3","Type":"ContainerStarted","Data":"fbe8c87d49338052cfa18d1743ba5b5d014a0245ca19b5dfdb7ed715ec3447c4"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.348176 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.372069 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" podStartSLOduration=51.372055342 podStartE2EDuration="51.372055342s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.339270147 +0000 UTC m=+111.380002474" watchObservedRunningTime="2026-02-25 11:19:57.372055342 +0000 UTC m=+111.412787669" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.372107 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" 
event={"ID":"f7118aeb-770e-44b1-87de-f0a633360ff6","Type":"ContainerStarted","Data":"f6cb6103b7ea0c3c6f96649ce0644c6caa49f6ba48c56c415a519044adaccbde"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.375731 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.377174 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.877151678 +0000 UTC m=+111.917884005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.382422 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.406095 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" podStartSLOduration=51.406081182 podStartE2EDuration="51.406081182s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.370729403 +0000 UTC m=+111.411461730" watchObservedRunningTime="2026-02-25 11:19:57.406081182 +0000 UTC m=+111.446813499" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.406746 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bpj6f" podStartSLOduration=7.406741957 podStartE2EDuration="7.406741957s" podCreationTimestamp="2026-02-25 11:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.404869868 +0000 UTC m=+111.445602195" watchObservedRunningTime="2026-02-25 11:19:57.406741957 +0000 UTC m=+111.447474284" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.408011 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" event={"ID":"1233075e-7e1b-48ab-bdec-50d771eec172","Type":"ContainerStarted","Data":"38786143405e71c49181aba157b1c3f8428f054e4e668b83f6567a5b0dec81a1"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.421494 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41822: no serving certificate available for the kubelet" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.422569 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" event={"ID":"7eaa1833-a4ad-422c-93db-6c442609d050","Type":"ContainerStarted","Data":"67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.422955 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.432661 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" event={"ID":"e8ae5d83-6d61-4c8e-b86a-f0456eedc676","Type":"ContainerStarted","Data":"cd4aca14223cc2132e87db4f09a473f16f671c94d71950676a4f9dff297dd79f"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.432696 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" event={"ID":"e8ae5d83-6d61-4c8e-b86a-f0456eedc676","Type":"ContainerStarted","Data":"3fc51b07b8e0d9e5da18d453fccf8bdd0e4b13469fc0f2e4fe833ca5ad11f8af"} Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.439153 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bfzqz" podStartSLOduration=51.439124026 podStartE2EDuration="51.439124026s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.437386093 +0000 UTC m=+111.478118420" watchObservedRunningTime="2026-02-25 11:19:57.439124026 +0000 UTC m=+111.479856353" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.461033 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rsd6r" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.479980 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.480485 5005 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:57.980473232 +0000 UTC m=+112.021205559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.521995 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.529535 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-69c2n" podStartSLOduration=51.52951736 podStartE2EDuration="51.52951736s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.472463791 +0000 UTC m=+111.513196118" watchObservedRunningTime="2026-02-25 11:19:57.52951736 +0000 UTC m=+111.570249687" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.545208 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nsq6k" podStartSLOduration=51.545190271 podStartE2EDuration="51.545190271s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 
11:19:57.542810194 +0000 UTC m=+111.583542531" watchObservedRunningTime="2026-02-25 11:19:57.545190271 +0000 UTC m=+111.585922598" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.575720 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" podStartSLOduration=51.575703523 podStartE2EDuration="51.575703523s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.572414803 +0000 UTC m=+111.613147130" watchObservedRunningTime="2026-02-25 11:19:57.575703523 +0000 UTC m=+111.616435850" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.581492 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.581793 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.081768244 +0000 UTC m=+112.122500571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.581950 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.585114 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.085098756 +0000 UTC m=+112.125831083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.608286 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-plj26" podStartSLOduration=51.608272521 podStartE2EDuration="51.608272521s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.605819311 +0000 UTC m=+111.646551638" watchObservedRunningTime="2026-02-25 11:19:57.608272521 +0000 UTC m=+111.649004848" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.647517 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" podStartSLOduration=51.647502180000004 podStartE2EDuration="51.64750218s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.644886395 +0000 UTC m=+111.685618722" watchObservedRunningTime="2026-02-25 11:19:57.64750218 +0000 UTC m=+111.688234507" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.683831 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.684072 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.184050882 +0000 UTC m=+112.224783219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.684476 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.684839 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.18482899 +0000 UTC m=+112.225561317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.719800 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hl9wb" podStartSLOduration=51.719784224 podStartE2EDuration="51.719784224s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.677896208 +0000 UTC m=+111.718628535" watchObservedRunningTime="2026-02-25 11:19:57.719784224 +0000 UTC m=+111.760516551" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.720487 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6q7wr" podStartSLOduration=6.72048243 podStartE2EDuration="6.72048243s" podCreationTimestamp="2026-02-25 11:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.719273596 +0000 UTC m=+111.760005923" watchObservedRunningTime="2026-02-25 11:19:57.72048243 +0000 UTC m=+111.761214757" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.769247 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" podStartSLOduration=7.769232546 podStartE2EDuration="7.769232546s" podCreationTimestamp="2026-02-25 11:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.767499743 +0000 UTC m=+111.808232080" watchObservedRunningTime="2026-02-25 11:19:57.769232546 +0000 UTC m=+111.809964873" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.786231 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.786657 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.286642551 +0000 UTC m=+112.327374878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.802570 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" podStartSLOduration=51.80254938 podStartE2EDuration="51.80254938s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.796462898 +0000 UTC m=+111.837195225" watchObservedRunningTime="2026-02-25 11:19:57.80254938 +0000 UTC m=+111.843281707" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.888017 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.888360 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.388344998 +0000 UTC m=+112.429077335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.917092 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" podStartSLOduration=51.917072774 podStartE2EDuration="51.917072774s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.916613548 +0000 UTC m=+111.957345885" watchObservedRunningTime="2026-02-25 11:19:57.917072774 +0000 UTC m=+111.957805101" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.978872 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" podStartSLOduration=51.978856806 podStartE2EDuration="51.978856806s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.948703167 +0000 UTC m=+111.989435494" watchObservedRunningTime="2026-02-25 11:19:57.978856806 +0000 UTC m=+112.019589133" Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.989606 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:57 crc kubenswrapper[5005]: E0225 11:19:57.989934 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.489920739 +0000 UTC m=+112.530653066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:57 crc kubenswrapper[5005]: I0225 11:19:57.999509 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:57.999848 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.011133 5005 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xnfp6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.011184 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" podUID="1233075e-7e1b-48ab-bdec-50d771eec172" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 25 11:19:58 
crc kubenswrapper[5005]: I0225 11:19:58.028541 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5zrbx" podStartSLOduration=52.028524076 podStartE2EDuration="52.028524076s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.027828821 +0000 UTC m=+112.068561148" watchObservedRunningTime="2026-02-25 11:19:58.028524076 +0000 UTC m=+112.069256393" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.029500 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rcnr2" podStartSLOduration=52.029488701 podStartE2EDuration="52.029488701s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:57.97786838 +0000 UTC m=+112.018600707" watchObservedRunningTime="2026-02-25 11:19:58.029488701 +0000 UTC m=+112.070221028" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.092134 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.092562 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.59254492 +0000 UTC m=+112.633277247 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.107607 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.193083 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.193283 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.69325786 +0000 UTC m=+112.733990187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.286636 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:19:58 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:19:58 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:19:58 crc kubenswrapper[5005]: healthz check failed Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.286721 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.295565 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.295972 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-25 11:19:58.795953862 +0000 UTC m=+112.836686189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.397236 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.397435 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.89740917 +0000 UTC m=+112.938141497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.397530 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.397878 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.897864087 +0000 UTC m=+112.938596414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.441006 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" event={"ID":"f7118aeb-770e-44b1-87de-f0a633360ff6","Type":"ContainerStarted","Data":"5ef53ef6cb21640e34527eafb78e1cd220a7cca4930376d458af317e1864b9d7"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.443498 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" event={"ID":"24d89ac7-e3b2-48ff-a21b-2de526920192","Type":"ContainerStarted","Data":"bdec344646927b7da0c63b2221baf021433920f4a579a8d511208cca9434a5a7"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.445553 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" event={"ID":"e7e9b633-d17c-4fc7-b23b-0e8f1dc25b80","Type":"ContainerStarted","Data":"5f90bcb0b3eb775fac4c3f425bf0436e42c3980b4a37c283ce75c86a42c7d2ef"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.445718 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.448609 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" 
event={"ID":"45ba893b-be5a-43dc-a979-b528286386cd","Type":"ContainerStarted","Data":"edb0940740430d22bea8fd5b164a2b409508f747c810e557d1becb83d37a52f3"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.449354 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.452191 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" event={"ID":"5d5348af-76fc-486e-9ad6-c0bbdd2d9223","Type":"ContainerStarted","Data":"44e99a7e2b668db3ad87a0da5be705439fc945944506f789b3c66411c038a704"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.454591 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" event={"ID":"de78fa9e-1a35-4b2e-b5c4-e5de42bf7396","Type":"ContainerStarted","Data":"ccd715ac296fb0672866fc8d1e85d2c648e2b681d0ece8556b90765311209906"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.456167 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4m859" event={"ID":"f57976f3-2e78-4677-9491-d86626551cb8","Type":"ContainerStarted","Data":"ff44300c720cae730fd4328e7b7d56217247b5e4bda383f432b7137c2ee0be5c"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.457745 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" event={"ID":"f32def92-50da-4035-a457-5d66df363c5c","Type":"ContainerStarted","Data":"58194bc656e5e785e0dc26f5427ce66d965a6a4c53bbe0ea2f3b4e021774fab2"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.458578 5005 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8zcnw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.458708 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" podUID="f32def92-50da-4035-a457-5d66df363c5c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.464485 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" event={"ID":"1233075e-7e1b-48ab-bdec-50d771eec172","Type":"ContainerStarted","Data":"d46bf2a923f379660902edc948111dda4b8dcd83ea2267b1d1846ae60cec0dac"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.467653 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-trf82" event={"ID":"4378ed4f-6f5c-418c-9027-b307ffde4aab","Type":"ContainerStarted","Data":"9996e2b172b984eca07df1913ff739fc3256a8f981f9f88e3db87936b9b30138"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.479193 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9ll9j" event={"ID":"e8ae5d83-6d61-4c8e-b86a-f0456eedc676","Type":"ContainerStarted","Data":"7239cca27c51f7721bc7ba7f5e2f36cf15f1042a9f4fb705bd1e9826f4d7c62a"} Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.488052 5005 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hzbjf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.488271 5005 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.488672 5005 patch_prober.go:28] interesting pod/downloads-7954f5f757-65dkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.488752 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-65dkm" podUID="1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.498135 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.498423 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.99839648 +0000 UTC m=+113.039128797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.498677 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.499057 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:58.999045063 +0000 UTC m=+113.039777390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.507561 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vmm2d" podStartSLOduration=52.507542393 podStartE2EDuration="52.507542393s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.495742084 +0000 UTC m=+112.536474411" watchObservedRunningTime="2026-02-25 11:19:58.507542393 +0000 UTC m=+112.548274720" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.510708 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qcfmz"] Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.512881 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.545495 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" podStartSLOduration=52.545480336 podStartE2EDuration="52.545480336s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.544454978 +0000 UTC m=+112.585187295" watchObservedRunningTime="2026-02-25 
11:19:58.545480336 +0000 UTC m=+112.586212663" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.575345 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" podStartSLOduration=52.575329904 podStartE2EDuration="52.575329904s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.571236315 +0000 UTC m=+112.611968652" watchObservedRunningTime="2026-02-25 11:19:58.575329904 +0000 UTC m=+112.616062231" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.600087 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.600398 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.100358726 +0000 UTC m=+113.141091053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.600662 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.606657 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.106641365 +0000 UTC m=+113.147373682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.684469 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjfgv" podStartSLOduration=52.684453351 podStartE2EDuration="52.684453351s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.616195734 +0000 UTC m=+112.656928061" watchObservedRunningTime="2026-02-25 11:19:58.684453351 +0000 UTC m=+112.725185678" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.686559 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5j95b" podStartSLOduration=52.686551027 podStartE2EDuration="52.686551027s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.683523787 +0000 UTC m=+112.724256114" watchObservedRunningTime="2026-02-25 11:19:58.686551027 +0000 UTC m=+112.727283354" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.714041 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.714647 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.21463206 +0000 UTC m=+113.255364387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.723627 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" podStartSLOduration=52.723612868000004 podStartE2EDuration="52.723612868s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:19:58.721880685 +0000 UTC m=+112.762613002" watchObservedRunningTime="2026-02-25 11:19:58.723612868 +0000 UTC m=+112.764345195" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.816152 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 
11:19:58.816481 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.316467702 +0000 UTC m=+113.357200029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.829477 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mn7l8"] Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.829671 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" podUID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" containerName="controller-manager" containerID="cri-o://d36c8d3bddc5398609b8d2aefce62286e556f55c1d04f10d63d1a89eaa43589d" gracePeriod=30 Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.836621 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41830: no serving certificate available for the kubelet" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.909472 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.912402 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.917921 5005 
patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-xwmct container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.917961 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.917962 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" podUID="24d89ac7-e3b2-48ff-a21b-2de526920192" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.918227 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.41821089 +0000 UTC m=+113.458943217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.918333 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:58 crc kubenswrapper[5005]: E0225 11:19:58.918666 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.418657067 +0000 UTC m=+113.459389394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.960255 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp"] Feb 25 11:19:58 crc kubenswrapper[5005]: I0225 11:19:58.960452 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" podUID="e972077c-5857-4a15-bd23-21b21fbad7b1" containerName="route-controller-manager" containerID="cri-o://9de2665b1e85562d6eaf297c410872fe6d66d92e123a4016dae828cb76709463" gracePeriod=30 Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.019721 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.019987 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.519962248 +0000 UTC m=+113.560694575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.020165 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.020550 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.520538469 +0000 UTC m=+113.561270786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.024104 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgfww"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.024995 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.038740 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.058123 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgfww"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.125854 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.126072 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-utilities\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 
crc kubenswrapper[5005]: I0225 11:19:59.126094 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-catalog-content\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.126145 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f2q\" (UniqueName: \"kubernetes.io/projected/a80639a8-456e-4f88-a949-ce0c6fa8284c-kube-api-access-f9f2q\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.126241 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.62622669 +0000 UTC m=+113.666959017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.187432 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9bdm4"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.188322 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.193956 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.234973 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.235046 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-utilities\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.235064 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-catalog-content\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.235112 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9f2q\" (UniqueName: \"kubernetes.io/projected/a80639a8-456e-4f88-a949-ce0c6fa8284c-kube-api-access-f9f2q\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.235166 5005 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9bdm4"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.235531 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-utilities\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.235756 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.735727711 +0000 UTC m=+113.776460038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.238865 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-catalog-content\") pod \"certified-operators-dgfww\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.286228 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9f2q\" (UniqueName: \"kubernetes.io/projected/a80639a8-456e-4f88-a949-ce0c6fa8284c-kube-api-access-f9f2q\") pod \"certified-operators-dgfww\" (UID: 
\"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.287506 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:19:59 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:19:59 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:19:59 crc kubenswrapper[5005]: healthz check failed Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.287538 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.336293 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.336434 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-catalog-content\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.336459 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flpk8\" (UniqueName: 
\"kubernetes.io/projected/f0fc9766-1d31-4d5a-8234-47d9e32f59be-kube-api-access-flpk8\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.336558 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-utilities\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.336672 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.83665883 +0000 UTC m=+113.877391157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.381127 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qf86g"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.382006 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.408688 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qf86g"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.420548 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.440090 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.440137 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-utilities\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.440190 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-catalog-content\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.440209 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flpk8\" (UniqueName: \"kubernetes.io/projected/f0fc9766-1d31-4d5a-8234-47d9e32f59be-kube-api-access-flpk8\") pod \"community-operators-9bdm4\" (UID: 
\"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.440703 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-utilities\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.440962 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:19:59.940948911 +0000 UTC m=+113.981681238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.443808 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-catalog-content\") pod \"community-operators-9bdm4\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.483262 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flpk8\" (UniqueName: \"kubernetes.io/projected/f0fc9766-1d31-4d5a-8234-47d9e32f59be-kube-api-access-flpk8\") pod \"community-operators-9bdm4\" (UID: 
\"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.506417 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.535837 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-trf82" event={"ID":"4378ed4f-6f5c-418c-9027-b307ffde4aab","Type":"ContainerStarted","Data":"55f16a51990c9795637ccaa574d0eb33ad661b0e261b74558d0bcbdf6938ddd0"} Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.540621 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.540844 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.540892 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.540927 5005 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.540948 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.540974 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dtt\" (UniqueName: \"kubernetes.io/projected/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-kube-api-access-v2dtt\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.541001 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-utilities\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.541032 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-catalog-content\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " 
pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.541178 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.041164103 +0000 UTC m=+114.081896430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.545365 5005 generic.go:334] "Generic (PLEG): container finished" podID="e972077c-5857-4a15-bd23-21b21fbad7b1" containerID="9de2665b1e85562d6eaf297c410872fe6d66d92e123a4016dae828cb76709463" exitCode=0 Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.545651 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" event={"ID":"e972077c-5857-4a15-bd23-21b21fbad7b1","Type":"ContainerDied","Data":"9de2665b1e85562d6eaf297c410872fe6d66d92e123a4016dae828cb76709463"} Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.548586 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.548755 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.553599 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.554916 5005 generic.go:334] "Generic (PLEG): container finished" podID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" containerID="d36c8d3bddc5398609b8d2aefce62286e556f55c1d04f10d63d1a89eaa43589d" exitCode=0 Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.554936 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.555072 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" event={"ID":"783f9d2c-ef3f-4915-819f-16ad8ddf943a","Type":"ContainerDied","Data":"d36c8d3bddc5398609b8d2aefce62286e556f55c1d04f10d63d1a89eaa43589d"} Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.555954 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.565989 5005 patch_prober.go:28] interesting pod/downloads-7954f5f757-65dkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.566313 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-65dkm" podUID="1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.581009 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jlrk4"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.630407 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.635119 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.636603 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlrk4"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.642390 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-utilities\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.642462 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-catalog-content\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.642564 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.646324 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: 
\"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.646472 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-catalog-content\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.646673 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dtt\" (UniqueName: \"kubernetes.io/projected/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-kube-api-access-v2dtt\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.646858 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r8dk\" (UniqueName: \"kubernetes.io/projected/d4ff9940-8045-4d50-99a8-85f30d202aae-kube-api-access-6r8dk\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.646999 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-utilities\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.652089 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-25 11:20:00.152071556 +0000 UTC m=+114.192803883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.671550 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-catalog-content\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.673334 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67964f07-93aa-42ec-90a7-730363ab668b-metrics-certs\") pod \"network-metrics-daemon-x2fvb\" (UID: \"67964f07-93aa-42ec-90a7-730363ab668b\") " pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.679219 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-utilities\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.694100 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.706993 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dtt\" (UniqueName: \"kubernetes.io/projected/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-kube-api-access-v2dtt\") pod \"certified-operators-qf86g\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.755769 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.755805 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-client-ca\") pod \"e972077c-5857-4a15-bd23-21b21fbad7b1\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.755972 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.755994 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-config\") pod \"e972077c-5857-4a15-bd23-21b21fbad7b1\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.756047 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq7qg\" (UniqueName: 
\"kubernetes.io/projected/e972077c-5857-4a15-bd23-21b21fbad7b1-kube-api-access-jq7qg\") pod \"e972077c-5857-4a15-bd23-21b21fbad7b1\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.756073 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e972077c-5857-4a15-bd23-21b21fbad7b1-serving-cert\") pod \"e972077c-5857-4a15-bd23-21b21fbad7b1\" (UID: \"e972077c-5857-4a15-bd23-21b21fbad7b1\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.756195 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r8dk\" (UniqueName: \"kubernetes.io/projected/d4ff9940-8045-4d50-99a8-85f30d202aae-kube-api-access-6r8dk\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.756233 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-utilities\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.756293 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-catalog-content\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.756960 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-catalog-content\") pod 
\"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.757879 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-utilities\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.758065 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.258050628 +0000 UTC m=+114.298782965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.758501 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "e972077c-5857-4a15-bd23-21b21fbad7b1" (UID: "e972077c-5857-4a15-bd23-21b21fbad7b1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.758895 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-config" (OuterVolumeSpecName: "config") pod "e972077c-5857-4a15-bd23-21b21fbad7b1" (UID: "e972077c-5857-4a15-bd23-21b21fbad7b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.778734 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e972077c-5857-4a15-bd23-21b21fbad7b1-kube-api-access-jq7qg" (OuterVolumeSpecName: "kube-api-access-jq7qg") pod "e972077c-5857-4a15-bd23-21b21fbad7b1" (UID: "e972077c-5857-4a15-bd23-21b21fbad7b1"). InnerVolumeSpecName "kube-api-access-jq7qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.792866 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e972077c-5857-4a15-bd23-21b21fbad7b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e972077c-5857-4a15-bd23-21b21fbad7b1" (UID: "e972077c-5857-4a15-bd23-21b21fbad7b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.798579 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r8dk\" (UniqueName: \"kubernetes.io/projected/d4ff9940-8045-4d50-99a8-85f30d202aae-kube-api-access-6r8dk\") pod \"community-operators-jlrk4\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.810038 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.813987 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.832707 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.850811 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x2fvb" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.858844 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-client-ca\") pod \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.858882 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-proxy-ca-bundles\") pod \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.858997 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/783f9d2c-ef3f-4915-819f-16ad8ddf943a-serving-cert\") pod \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859016 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcpz4\" (UniqueName: 
\"kubernetes.io/projected/783f9d2c-ef3f-4915-819f-16ad8ddf943a-kube-api-access-tcpz4\") pod \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859073 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-config\") pod \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\" (UID: \"783f9d2c-ef3f-4915-819f-16ad8ddf943a\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859221 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859273 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq7qg\" (UniqueName: \"kubernetes.io/projected/e972077c-5857-4a15-bd23-21b21fbad7b1-kube-api-access-jq7qg\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859286 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e972077c-5857-4a15-bd23-21b21fbad7b1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859295 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.859303 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e972077c-5857-4a15-bd23-21b21fbad7b1-config\") on 
node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.860090 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-client-ca" (OuterVolumeSpecName: "client-ca") pod "783f9d2c-ef3f-4915-819f-16ad8ddf943a" (UID: "783f9d2c-ef3f-4915-819f-16ad8ddf943a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.860506 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "783f9d2c-ef3f-4915-819f-16ad8ddf943a" (UID: "783f9d2c-ef3f-4915-819f-16ad8ddf943a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.862644 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.362627028 +0000 UTC m=+114.403359355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.865577 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783f9d2c-ef3f-4915-819f-16ad8ddf943a-kube-api-access-tcpz4" (OuterVolumeSpecName: "kube-api-access-tcpz4") pod "783f9d2c-ef3f-4915-819f-16ad8ddf943a" (UID: "783f9d2c-ef3f-4915-819f-16ad8ddf943a"). InnerVolumeSpecName "kube-api-access-tcpz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.866069 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-config" (OuterVolumeSpecName: "config") pod "783f9d2c-ef3f-4915-819f-16ad8ddf943a" (UID: "783f9d2c-ef3f-4915-819f-16ad8ddf943a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.888227 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783f9d2c-ef3f-4915-819f-16ad8ddf943a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "783f9d2c-ef3f-4915-819f-16ad8ddf943a" (UID: "783f9d2c-ef3f-4915-819f-16ad8ddf943a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.899763 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8zcnw" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.915479 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgfww"] Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.960314 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.960611 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.960624 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.960633 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/783f9d2c-ef3f-4915-819f-16ad8ddf943a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.960641 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcpz4\" (UniqueName: \"kubernetes.io/projected/783f9d2c-ef3f-4915-819f-16ad8ddf943a-kube-api-access-tcpz4\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.960649 5005 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/783f9d2c-ef3f-4915-819f-16ad8ddf943a-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:19:59 crc kubenswrapper[5005]: E0225 11:19:59.960708 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.460693962 +0000 UTC m=+114.501426279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.981334 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:19:59 crc kubenswrapper[5005]: I0225 11:19:59.989545 5005 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 25 11:19:59 crc kubenswrapper[5005]: W0225 11:19:59.991125 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda80639a8_456e_4f88_a949_ce0c6fa8284c.slice/crio-790070acf20bd9866f8d52f6d7272a7a6ea2ee3b3e3b3bdc89f35537df5b07fb WatchSource:0}: Error finding container 790070acf20bd9866f8d52f6d7272a7a6ea2ee3b3e3b3bdc89f35537df5b07fb: Status 404 returned error can't find the container with id 790070acf20bd9866f8d52f6d7272a7a6ea2ee3b3e3b3bdc89f35537df5b07fb Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.063669 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.063964 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.563954316 +0000 UTC m=+114.604686643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.149550 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533640-qhftv"] Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.150124 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" containerName="controller-manager" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.150138 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" containerName="controller-manager" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.150147 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e972077c-5857-4a15-bd23-21b21fbad7b1" containerName="route-controller-manager" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.150155 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e972077c-5857-4a15-bd23-21b21fbad7b1" containerName="route-controller-manager" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.150280 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e972077c-5857-4a15-bd23-21b21fbad7b1" containerName="route-controller-manager" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.150298 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" containerName="controller-manager" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.150667 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.153721 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.154029 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.154194 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.156529 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533640-qhftv"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.164425 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.164618 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qm9d\" (UniqueName: \"kubernetes.io/projected/705e7826-0109-4ea7-bfe8-3cf9e37285bc-kube-api-access-6qm9d\") pod \"auto-csr-approver-29533640-qhftv\" (UID: \"705e7826-0109-4ea7-bfe8-3cf9e37285bc\") " pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.164656 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-25 11:20:00.664625006 +0000 UTC m=+114.705357323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.164683 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.164936 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.664924286 +0000 UTC m=+114.705656613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.220081 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9bdm4"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.267962 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.268178 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qm9d\" (UniqueName: \"kubernetes.io/projected/705e7826-0109-4ea7-bfe8-3cf9e37285bc-kube-api-access-6qm9d\") pod \"auto-csr-approver-29533640-qhftv\" (UID: \"705e7826-0109-4ea7-bfe8-3cf9e37285bc\") " pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.269583 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.769562479 +0000 UTC m=+114.810294806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.289002 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:00 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:00 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:00 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.289378 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.289509 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ndnvt" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.332408 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qm9d\" (UniqueName: \"kubernetes.io/projected/705e7826-0109-4ea7-bfe8-3cf9e37285bc-kube-api-access-6qm9d\") pod \"auto-csr-approver-29533640-qhftv\" (UID: \"705e7826-0109-4ea7-bfe8-3cf9e37285bc\") " pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.369000 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.369294 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.869282203 +0000 UTC m=+114.910014530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.469873 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.470206 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:20:00.970179231 +0000 UTC m=+115.010911558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.475447 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.498453 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-x2fvb"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.513336 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qf86g"] Feb 25 11:20:00 crc kubenswrapper[5005]: W0225 11:20:00.521250 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67964f07_93aa_42ec_90a7_730363ab668b.slice/crio-44b4a311341e9ee25d939727d45239187d21261889e4d74fb47cef59aca32018 WatchSource:0}: Error finding container 44b4a311341e9ee25d939727d45239187d21261889e4d74fb47cef59aca32018: Status 404 returned error can't find the container with id 44b4a311341e9ee25d939727d45239187d21261889e4d74fb47cef59aca32018 Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.566550 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65d84f95c5-g7zdf"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.567420 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.571924 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.572208 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:20:01.072193879 +0000 UTC m=+115.112926206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: W0225 11:20:00.578107 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c246183b454cb5b5bc12254a977e0e8b642e95b28411e75361957155e73a667e WatchSource:0}: Error finding container c246183b454cb5b5bc12254a977e0e8b642e95b28411e75361957155e73a667e: Status 404 returned error can't find the container with id c246183b454cb5b5bc12254a977e0e8b642e95b28411e75361957155e73a667e Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.582970 5005 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.583362 5005 generic.go:334] "Generic (PLEG): container finished" podID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerID="d7e7aada8b128f64aa57ad37afd3c8f59d02fcf55f3948ced7964bd6a990f67c" exitCode=0 Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.583580 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerDied","Data":"d7e7aada8b128f64aa57ad37afd3c8f59d02fcf55f3948ced7964bd6a990f67c"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.583609 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerStarted","Data":"5164004bd7edea935f82d4af708fec589eb3e2afbed0b30967d90c94bf831893"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.583681 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.590079 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.590902 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65d84f95c5-g7zdf"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.596405 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.596770 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp" event={"ID":"e972077c-5857-4a15-bd23-21b21fbad7b1","Type":"ContainerDied","Data":"f0a9dba9055c4f71a1e6ed32fa8910778dca0cb9df098b088586bcda5b29b176"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.596816 5005 scope.go:117] "RemoveContainer" containerID="9de2665b1e85562d6eaf297c410872fe6d66d92e123a4016dae828cb76709463" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.606147 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.606192 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.606237 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mn7l8" event={"ID":"783f9d2c-ef3f-4915-819f-16ad8ddf943a","Type":"ContainerDied","Data":"70a7694912f37560feb8c777edd32e486a0115c61ecb9e999e0a01f8befaec28"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.616995 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"19093bf09a3510e07dd309da89c8cfabb266257945cc74e4e80afa14029bfe75"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.626908 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-trf82" 
event={"ID":"4378ed4f-6f5c-418c-9027-b307ffde4aab","Type":"ContainerStarted","Data":"697311bf2c0a2b8e055f706cdf3ea26101ef7bb4e70b5b30e7217918019a831d"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.626962 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-trf82" event={"ID":"4378ed4f-6f5c-418c-9027-b307ffde4aab","Type":"ContainerStarted","Data":"01a7e4d852a26701eaaf442379a488fbf0bcc8f0d6a8b8c222fd7198547a684e"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.632211 5005 generic.go:334] "Generic (PLEG): container finished" podID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerID="eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955" exitCode=0 Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.632270 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfww" event={"ID":"a80639a8-456e-4f88-a949-ce0c6fa8284c","Type":"ContainerDied","Data":"eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.632298 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfww" event={"ID":"a80639a8-456e-4f88-a949-ce0c6fa8284c","Type":"ContainerStarted","Data":"790070acf20bd9866f8d52f6d7272a7a6ea2ee3b3e3b3bdc89f35537df5b07fb"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.661993 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" event={"ID":"67964f07-93aa-42ec-90a7-730363ab668b","Type":"ContainerStarted","Data":"44b4a311341e9ee25d939727d45239187d21261889e4d74fb47cef59aca32018"} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.662167 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" 
containerID="cri-o://67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" gracePeriod=30 Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673273 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673548 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-client-ca\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673615 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-config\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673662 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-config\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673696 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-client-ca\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673735 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a769c-92b0-4958-9339-5331a07280c5-serving-cert\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673754 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-proxy-ca-bundles\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673866 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgmj\" (UniqueName: \"kubernetes.io/projected/0e2a769c-92b0-4958-9339-5331a07280c5-kube-api-access-hlgmj\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673891 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b36f61-4a48-4c6e-b755-9ffc31065462-serving-cert\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " 
pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.673982 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fctx\" (UniqueName: \"kubernetes.io/projected/39b36f61-4a48-4c6e-b755-9ffc31065462-kube-api-access-6fctx\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.675431 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-25 11:20:01.17540712 +0000 UTC m=+115.216139447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.717044 5005 scope.go:117] "RemoveContainer" containerID="d36c8d3bddc5398609b8d2aefce62286e556f55c1d04f10d63d1a89eaa43589d" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.727115 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-trf82" podStartSLOduration=10.727089744 podStartE2EDuration="10.727089744s" podCreationTimestamp="2026-02-25 11:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-25 11:20:00.707706628 +0000 UTC m=+114.748438955" watchObservedRunningTime="2026-02-25 11:20:00.727089744 +0000 UTC m=+114.767822071" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.768197 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mn7l8"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.771825 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mn7l8"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774783 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgmj\" (UniqueName: \"kubernetes.io/projected/0e2a769c-92b0-4958-9339-5331a07280c5-kube-api-access-hlgmj\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774818 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b36f61-4a48-4c6e-b755-9ffc31065462-serving-cert\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774853 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fctx\" (UniqueName: \"kubernetes.io/projected/39b36f61-4a48-4c6e-b755-9ffc31065462-kube-api-access-6fctx\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774898 5005 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-client-ca\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774921 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-config\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774938 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-config\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774958 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-client-ca\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.774976 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a769c-92b0-4958-9339-5331a07280c5-serving-cert\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: 
I0225 11:20:00.774995 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.775014 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-proxy-ca-bundles\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.776857 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-proxy-ca-bundles\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.778531 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-config\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.779291 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-client-ca\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: 
\"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.782060 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-config\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.783697 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b36f61-4a48-4c6e-b755-9ffc31065462-serving-cert\") pod \"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: E0225 11:20:00.783975 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-25 11:20:01.283961426 +0000 UTC m=+115.324693743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xxm8g" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.784682 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-client-ca\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.789442 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.790659 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mqmwp"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.797663 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgmj\" (UniqueName: \"kubernetes.io/projected/0e2a769c-92b0-4958-9339-5331a07280c5-kube-api-access-hlgmj\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.804065 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fctx\" (UniqueName: \"kubernetes.io/projected/39b36f61-4a48-4c6e-b755-9ffc31065462-kube-api-access-6fctx\") pod 
\"route-controller-manager-746c6848cc-rt6bf\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.809758 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jlrk4"] Feb 25 11:20:00 crc kubenswrapper[5005]: W0225 11:20:00.814519 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c49e7385c36620d8779d647db00a19760551e6f1bf7d3aa3209fc0e9dcdcba7c WatchSource:0}: Error finding container c49e7385c36620d8779d647db00a19760551e6f1bf7d3aa3209fc0e9dcdcba7c: Status 404 returned error can't find the container with id c49e7385c36620d8779d647db00a19760551e6f1bf7d3aa3209fc0e9dcdcba7c Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.814982 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a769c-92b0-4958-9339-5331a07280c5-serving-cert\") pod \"controller-manager-65d84f95c5-g7zdf\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.837102 5005 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-25T11:19:59.989563825Z","Handler":null,"Name":""} Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.843257 5005 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.843288 5005 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.860301 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65d84f95c5-g7zdf"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.873665 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.875735 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.879790 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.921420 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533640-qhftv"] Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.977227 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.980192 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.980938 5005 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 25 11:20:00 crc kubenswrapper[5005]: I0225 11:20:00.980979 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.061778 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xxm8g\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.137600 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65d84f95c5-g7zdf"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.162473 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vw6ng"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.163411 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.165289 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.172042 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw6ng"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.180876 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhjn\" (UniqueName: \"kubernetes.io/projected/4e5d39a0-d977-4973-a2a3-55699e86de91-kube-api-access-6vhjn\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.180939 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-catalog-content\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.181003 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-utilities\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: W0225 11:20:01.213978 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2a769c_92b0_4958_9339_5331a07280c5.slice/crio-e64bad0e0326cf22dee7bd3587014be4f68ab64cdaecac4ff394de37f8ccea26 
WatchSource:0}: Error finding container e64bad0e0326cf22dee7bd3587014be4f68ab64cdaecac4ff394de37f8ccea26: Status 404 returned error can't find the container with id e64bad0e0326cf22dee7bd3587014be4f68ab64cdaecac4ff394de37f8ccea26 Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.281541 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.282535 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-utilities\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.282611 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhjn\" (UniqueName: \"kubernetes.io/projected/4e5d39a0-d977-4973-a2a3-55699e86de91-kube-api-access-6vhjn\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.282649 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-catalog-content\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.283017 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-utilities\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 
11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.283076 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-catalog-content\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.285925 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:01 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:01 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:01 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.286000 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:01 crc kubenswrapper[5005]: W0225 11:20:01.291615 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39b36f61_4a48_4c6e_b755_9ffc31065462.slice/crio-a7d3e260bb9e9f2e7847e38ccd3c519a113bbecdda6a81ea6087e04c3750870e WatchSource:0}: Error finding container a7d3e260bb9e9f2e7847e38ccd3c519a113bbecdda6a81ea6087e04c3750870e: Status 404 returned error can't find the container with id a7d3e260bb9e9f2e7847e38ccd3c519a113bbecdda6a81ea6087e04c3750870e Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.306081 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vhjn\" (UniqueName: 
\"kubernetes.io/projected/4e5d39a0-d977-4973-a2a3-55699e86de91-kube-api-access-6vhjn\") pod \"redhat-marketplace-vw6ng\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.307489 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.438040 5005 ???:1] "http: TLS handshake error from 192.168.126.11:41834: no serving certificate available for the kubelet" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.483723 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.537433 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xxm8g"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.565763 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5mfq"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.566763 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.587001 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5mfq"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.685188 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:20:01 crc kubenswrapper[5005]: E0225 11:20:01.685406 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.693599 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-catalog-content\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.693670 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-utilities\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.693699 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctf8w\" (UniqueName: 
\"kubernetes.io/projected/5552a8c8-de53-484f-a47f-42dbd9983137-kube-api-access-ctf8w\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.723871 5005 generic.go:334] "Generic (PLEG): container finished" podID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerID="0f90e868f308552b9fe1a39a52fe61b1b4744f25cc4bba1dc7625c4574cb65ce" exitCode=0 Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.723982 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qf86g" event={"ID":"5a7766dc-4b89-4d09-87a3-7690f8af1ad7","Type":"ContainerDied","Data":"0f90e868f308552b9fe1a39a52fe61b1b4744f25cc4bba1dc7625c4574cb65ce"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.724016 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qf86g" event={"ID":"5a7766dc-4b89-4d09-87a3-7690f8af1ad7","Type":"ContainerStarted","Data":"c4a0bb2e9cd7adb5c243d7efb7bb232e237aa44074af7de974f87a5096aaaf78"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.744928 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a890f4f4772043349519ae65ec5f7ebb6d6327003098bfba79400969e47d2305"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.781616 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" event={"ID":"39b36f61-4a48-4c6e-b755-9ffc31065462","Type":"ContainerStarted","Data":"fb3866567aa9dba2dd318ea031f13c8beb2a112a4a906fc96773df8a9ab26e73"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.781681 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" event={"ID":"39b36f61-4a48-4c6e-b755-9ffc31065462","Type":"ContainerStarted","Data":"a7d3e260bb9e9f2e7847e38ccd3c519a113bbecdda6a81ea6087e04c3750870e"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.782083 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.783586 5005 generic.go:334] "Generic (PLEG): container finished" podID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerID="58a0cac6b64186b40055e05be39c77b4e3b3c823d354ab7510582a7dda79b83e" exitCode=0 Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.783823 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlrk4" event={"ID":"d4ff9940-8045-4d50-99a8-85f30d202aae","Type":"ContainerDied","Data":"58a0cac6b64186b40055e05be39c77b4e3b3c823d354ab7510582a7dda79b83e"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.783854 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlrk4" event={"ID":"d4ff9940-8045-4d50-99a8-85f30d202aae","Type":"ContainerStarted","Data":"370ff990f6319cecced9f1a6cca8777cdb96869da770adbf118a1b82501ccbc4"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.787197 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" event={"ID":"0e2a769c-92b0-4958-9339-5331a07280c5","Type":"ContainerStarted","Data":"de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.787205 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" podUID="0e2a769c-92b0-4958-9339-5331a07280c5" containerName="controller-manager" 
containerID="cri-o://de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c" gracePeriod=30 Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.787225 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" event={"ID":"0e2a769c-92b0-4958-9339-5331a07280c5","Type":"ContainerStarted","Data":"e64bad0e0326cf22dee7bd3587014be4f68ab64cdaecac4ff394de37f8ccea26"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.788513 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.794567 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-catalog-content\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.794638 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-utilities\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.794668 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctf8w\" (UniqueName: \"kubernetes.io/projected/5552a8c8-de53-484f-a47f-42dbd9983137-kube-api-access-ctf8w\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.796315 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-catalog-content\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.796618 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-utilities\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.796832 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"837dceda5ae4c4dae5bb29cbbfdd98765104cd3ebede2a01afed3daff8e7b67f"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.796879 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c246183b454cb5b5bc12254a977e0e8b642e95b28411e75361957155e73a667e"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.797455 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.804444 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" event={"ID":"67964f07-93aa-42ec-90a7-730363ab668b","Type":"ContainerStarted","Data":"0e32d0a238907f34e110212a2423bc2c578e06f35bc80dd95efed88e994c718d"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.804476 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x2fvb" 
event={"ID":"67964f07-93aa-42ec-90a7-730363ab668b","Type":"ContainerStarted","Data":"c79d929a66222c9164b76c245d348baea85e8cc61045dc2c68469e76b5ec1ec2"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.821722 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctf8w\" (UniqueName: \"kubernetes.io/projected/5552a8c8-de53-484f-a47f-42dbd9983137-kube-api-access-ctf8w\") pod \"redhat-marketplace-z5mfq\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.846267 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"04a2cf06bd90b4b7f551956ebde7c92c1e7150509c82c01beea7d32317496551"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.846310 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c49e7385c36620d8779d647db00a19760551e6f1bf7d3aa3209fc0e9dcdcba7c"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.854797 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533640-qhftv" event={"ID":"705e7826-0109-4ea7-bfe8-3cf9e37285bc","Type":"ContainerStarted","Data":"6c93eec9f67bcc8155d49ea3cabcdf9fac05cfbd181ab5188e67802193042e1e"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.860096 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" event={"ID":"4031b2bd-16b2-49b4-a187-5eb591356aff","Type":"ContainerStarted","Data":"051e09b9a1bc39b4ecf381355bb6bd254e70f03f9356d0f3dfed9890553941ea"} Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.860486 5005 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.868111 5005 patch_prober.go:28] interesting pod/controller-manager-65d84f95c5-g7zdf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": read tcp 10.217.0.2:57354->10.217.0.49:8443: read: connection reset by peer" start-of-body= Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.868176 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" podUID="0e2a769c-92b0-4958-9339-5331a07280c5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": read tcp 10.217.0.2:57354->10.217.0.49:8443: read: connection reset by peer" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.868690 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" podStartSLOduration=2.868651628 podStartE2EDuration="2.868651628s" podCreationTimestamp="2026-02-25 11:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:01.844655923 +0000 UTC m=+115.885388250" watchObservedRunningTime="2026-02-25 11:20:01.868651628 +0000 UTC m=+115.909383955" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.889629 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.902644 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" podStartSLOduration=3.9026276060000002 podStartE2EDuration="3.902627606s" podCreationTimestamp="2026-02-25 11:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:01.892782397 +0000 UTC m=+115.933514724" watchObservedRunningTime="2026-02-25 11:20:01.902627606 +0000 UTC m=+115.943359933" Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.908507 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw6ng"] Feb 25 11:20:01 crc kubenswrapper[5005]: I0225 11:20:01.925250 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-x2fvb" podStartSLOduration=55.92523413 podStartE2EDuration="55.92523413s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:01.924297976 +0000 UTC m=+115.965030303" watchObservedRunningTime="2026-02-25 11:20:01.92523413 +0000 UTC m=+115.965966457" Feb 25 11:20:01 crc kubenswrapper[5005]: W0225 11:20:01.977914 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e5d39a0_d977_4973_a2a3_55699e86de91.slice/crio-523f7e6a2ac9e9e8ec20808b144ff6668206e3f5b4bbf0539e1031258d5cbe73 WatchSource:0}: Error finding container 523f7e6a2ac9e9e8ec20808b144ff6668206e3f5b4bbf0539e1031258d5cbe73: Status 404 returned error can't find the container with id 523f7e6a2ac9e9e8ec20808b144ff6668206e3f5b4bbf0539e1031258d5cbe73 Feb 25 11:20:02 crc 
kubenswrapper[5005]: I0225 11:20:02.093626 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.124522 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" podStartSLOduration=56.124502732 podStartE2EDuration="56.124502732s" podCreationTimestamp="2026-02-25 11:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:01.96692936 +0000 UTC m=+116.007661707" watchObservedRunningTime="2026-02-25 11:20:02.124502732 +0000 UTC m=+116.165235059" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.168060 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhhkb"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.169697 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.174664 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.189358 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhhkb"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.220099 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.221028 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.224972 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.225164 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.234435 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.291664 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:02 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:02 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:02 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.291720 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.295668 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5mfq"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.312069 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-utilities\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") 
" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.312135 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-catalog-content\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.312161 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62sg8\" (UniqueName: \"kubernetes.io/projected/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-kube-api-access-62sg8\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: W0225 11:20:02.328056 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5552a8c8_de53_484f_a47f_42dbd9983137.slice/crio-f1631d1827497423d3751e82eda53e233f2d61031d84cbf2ad881c88c6f3d250 WatchSource:0}: Error finding container f1631d1827497423d3751e82eda53e233f2d61031d84cbf2ad881c88c6f3d250: Status 404 returned error can't find the container with id f1631d1827497423d3751e82eda53e233f2d61031d84cbf2ad881c88c6f3d250 Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.332269 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.367942 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-756f9dc565-vwbkh"] Feb 25 11:20:02 crc kubenswrapper[5005]: E0225 11:20:02.368166 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2a769c-92b0-4958-9339-5331a07280c5" containerName="controller-manager" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.368178 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2a769c-92b0-4958-9339-5331a07280c5" containerName="controller-manager" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.368283 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2a769c-92b0-4958-9339-5331a07280c5" containerName="controller-manager" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.368641 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.377386 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756f9dc565-vwbkh"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.413156 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.413191 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.413952 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-utilities\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.414189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-catalog-content\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.414350 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-62sg8\" (UniqueName: \"kubernetes.io/projected/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-kube-api-access-62sg8\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.414405 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-utilities\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.414782 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-catalog-content\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.432067 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62sg8\" (UniqueName: \"kubernetes.io/projected/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-kube-api-access-62sg8\") pod \"redhat-operators-zhhkb\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.515912 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.521686 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a769c-92b0-4958-9339-5331a07280c5-serving-cert\") pod \"0e2a769c-92b0-4958-9339-5331a07280c5\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.521788 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-client-ca\") pod \"0e2a769c-92b0-4958-9339-5331a07280c5\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.521808 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-config\") pod \"0e2a769c-92b0-4958-9339-5331a07280c5\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.521830 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-proxy-ca-bundles\") pod \"0e2a769c-92b0-4958-9339-5331a07280c5\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.521884 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlgmj\" (UniqueName: \"kubernetes.io/projected/0e2a769c-92b0-4958-9339-5331a07280c5-kube-api-access-hlgmj\") pod \"0e2a769c-92b0-4958-9339-5331a07280c5\" (UID: \"0e2a769c-92b0-4958-9339-5331a07280c5\") " Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522326 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-proxy-ca-bundles\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522390 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgn94\" (UniqueName: \"kubernetes.io/projected/3320b664-bd3b-44df-93f6-0e7f6108f60f-kube-api-access-rgn94\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522476 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-config\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522503 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3320b664-bd3b-44df-93f6-0e7f6108f60f-serving-cert\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522632 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 
11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522652 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522839 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-client-ca\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522844 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e2a769c-92b0-4958-9339-5331a07280c5" (UID: "0e2a769c-92b0-4958-9339-5331a07280c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522701 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.522888 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-config" (OuterVolumeSpecName: "config") pod "0e2a769c-92b0-4958-9339-5331a07280c5" (UID: "0e2a769c-92b0-4958-9339-5331a07280c5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.523194 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.523564 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0e2a769c-92b0-4958-9339-5331a07280c5" (UID: "0e2a769c-92b0-4958-9339-5331a07280c5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.559804 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.563666 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twk58"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.565100 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2a769c-92b0-4958-9339-5331a07280c5-kube-api-access-hlgmj" (OuterVolumeSpecName: "kube-api-access-hlgmj") pod "0e2a769c-92b0-4958-9339-5331a07280c5" (UID: "0e2a769c-92b0-4958-9339-5331a07280c5"). InnerVolumeSpecName "kube-api-access-hlgmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.565164 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2a769c-92b0-4958-9339-5331a07280c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e2a769c-92b0-4958-9339-5331a07280c5" (UID: "0e2a769c-92b0-4958-9339-5331a07280c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.565642 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.577713 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twk58"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.623882 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-client-ca\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.623943 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-proxy-ca-bundles\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.623988 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgn94\" (UniqueName: \"kubernetes.io/projected/3320b664-bd3b-44df-93f6-0e7f6108f60f-kube-api-access-rgn94\") pod \"controller-manager-756f9dc565-vwbkh\" 
(UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.624018 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-config\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.624814 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-client-ca\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.625158 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-config\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.624037 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3320b664-bd3b-44df-93f6-0e7f6108f60f-serving-cert\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.625427 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-config\") on node \"crc\" DevicePath \"\"" Feb 
25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.625462 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e2a769c-92b0-4958-9339-5331a07280c5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.625486 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlgmj\" (UniqueName: \"kubernetes.io/projected/0e2a769c-92b0-4958-9339-5331a07280c5-kube-api-access-hlgmj\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.625496 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a769c-92b0-4958-9339-5331a07280c5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.631723 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3320b664-bd3b-44df-93f6-0e7f6108f60f-serving-cert\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.632147 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-proxy-ca-bundles\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.639776 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgn94\" (UniqueName: \"kubernetes.io/projected/3320b664-bd3b-44df-93f6-0e7f6108f60f-kube-api-access-rgn94\") pod \"controller-manager-756f9dc565-vwbkh\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " 
pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.647599 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.689639 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.715306 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783f9d2c-ef3f-4915-819f-16ad8ddf943a" path="/var/lib/kubelet/pods/783f9d2c-ef3f-4915-819f-16ad8ddf943a/volumes" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.716038 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.717159 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e972077c-5857-4a15-bd23-21b21fbad7b1" path="/var/lib/kubelet/pods/e972077c-5857-4a15-bd23-21b21fbad7b1/volumes" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.719999 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.727028 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-928hn\" (UniqueName: \"kubernetes.io/projected/7d259751-70f2-4fcc-a6c7-4b99993eb217-kube-api-access-928hn\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.727100 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-utilities\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.727152 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-catalog-content\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.827464 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-utilities\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.827539 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-catalog-content\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.827570 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-928hn\" (UniqueName: \"kubernetes.io/projected/7d259751-70f2-4fcc-a6c7-4b99993eb217-kube-api-access-928hn\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.828710 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-catalog-content\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.828782 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-utilities\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.847661 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhhkb"] Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.854067 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-928hn\" (UniqueName: \"kubernetes.io/projected/7d259751-70f2-4fcc-a6c7-4b99993eb217-kube-api-access-928hn\") pod \"redhat-operators-twk58\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.924647 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.983522 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" event={"ID":"4031b2bd-16b2-49b4-a187-5eb591356aff","Type":"ContainerStarted","Data":"f4b1b34459113a88f4ab7aae312f54578279e010e98a435eca458731625356dd"} Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.986616 5005 generic.go:334] "Generic (PLEG): container finished" podID="0e2a769c-92b0-4958-9339-5331a07280c5" containerID="de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c" exitCode=0 Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.986688 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" event={"ID":"0e2a769c-92b0-4958-9339-5331a07280c5","Type":"ContainerDied","Data":"de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c"} Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.986706 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" event={"ID":"0e2a769c-92b0-4958-9339-5331a07280c5","Type":"ContainerDied","Data":"e64bad0e0326cf22dee7bd3587014be4f68ab64cdaecac4ff394de37f8ccea26"} Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.986742 5005 scope.go:117] "RemoveContainer" containerID="de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.986871 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65d84f95c5-g7zdf" Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.994470 5005 generic.go:334] "Generic (PLEG): container finished" podID="5552a8c8-de53-484f-a47f-42dbd9983137" containerID="87acc748555d5bf915b5cdca04b4aef7a9cca547eab1b788807dd41b7632de11" exitCode=0 Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.994553 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5mfq" event={"ID":"5552a8c8-de53-484f-a47f-42dbd9983137","Type":"ContainerDied","Data":"87acc748555d5bf915b5cdca04b4aef7a9cca547eab1b788807dd41b7632de11"} Feb 25 11:20:02 crc kubenswrapper[5005]: I0225 11:20:02.994580 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5mfq" event={"ID":"5552a8c8-de53-484f-a47f-42dbd9983137","Type":"ContainerStarted","Data":"f1631d1827497423d3751e82eda53e233f2d61031d84cbf2ad881c88c6f3d250"} Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.006728 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.009124 5005 generic.go:334] "Generic (PLEG): container finished" podID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerID="f0d343acbb4beaffba9cb2648c4430d63fe8db62ff5579889441014ff4c5d179" exitCode=0 Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.009277 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw6ng" event={"ID":"4e5d39a0-d977-4973-a2a3-55699e86de91","Type":"ContainerDied","Data":"f0d343acbb4beaffba9cb2648c4430d63fe8db62ff5579889441014ff4c5d179"} Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.009299 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw6ng" 
event={"ID":"4e5d39a0-d977-4973-a2a3-55699e86de91","Type":"ContainerStarted","Data":"523f7e6a2ac9e9e8ec20808b144ff6668206e3f5b4bbf0539e1031258d5cbe73"} Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.010853 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.010837495 podStartE2EDuration="1.010837495s" podCreationTimestamp="2026-02-25 11:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:03.001061028 +0000 UTC m=+117.041793365" watchObservedRunningTime="2026-02-25 11:20:03.010837495 +0000 UTC m=+117.051569822" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.026516 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xnfp6" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.032512 5005 scope.go:117] "RemoveContainer" containerID="de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c" Feb 25 11:20:03 crc kubenswrapper[5005]: E0225 11:20:03.049345 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c\": container with ID starting with de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c not found: ID does not exist" containerID="de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.049399 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c"} err="failed to get container status \"de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c\": rpc error: code = NotFound desc = could not find container 
\"de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c\": container with ID starting with de04650e7cbf42d512665858a5c6d11188f23311edab92dcabe2af89c7f8df6c not found: ID does not exist" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.077126 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65d84f95c5-g7zdf"] Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.089620 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65d84f95c5-g7zdf"] Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.089675 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.283239 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.286658 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:03 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:03 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:03 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.286722 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.361973 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-756f9dc565-vwbkh"] Feb 25 11:20:03 crc 
kubenswrapper[5005]: I0225 11:20:03.429435 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twk58"] Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.486900 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.486934 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.489612 5005 patch_prober.go:28] interesting pod/console-f9d7485db-277gg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.489664 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-277gg" podUID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 25 11:20:03 crc kubenswrapper[5005]: W0225 11:20:03.571156 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3320b664_bd3b_44df_93f6_0e7f6108f60f.slice/crio-f39978faf1eef3a9f0acae4419c16f2ae0a27d33d9c4f017e61b00cb44a0f89e WatchSource:0}: Error finding container f39978faf1eef3a9f0acae4419c16f2ae0a27d33d9c4f017e61b00cb44a0f89e: Status 404 returned error can't find the container with id f39978faf1eef3a9f0acae4419c16f2ae0a27d33d9c4f017e61b00cb44a0f89e Feb 25 11:20:03 crc kubenswrapper[5005]: W0225 11:20:03.571623 5005 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d259751_70f2_4fcc_a6c7_4b99993eb217.slice/crio-fd4fc558b9f780612dbf63fae249c8b80f8f063ea7dad5fa4e5d53d85965572e WatchSource:0}: Error finding container fd4fc558b9f780612dbf63fae249c8b80f8f063ea7dad5fa4e5d53d85965572e: Status 404 returned error can't find the container with id fd4fc558b9f780612dbf63fae249c8b80f8f063ea7dad5fa4e5d53d85965572e Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.773607 5005 patch_prober.go:28] interesting pod/downloads-7954f5f757-65dkm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.773658 5005 patch_prober.go:28] interesting pod/downloads-7954f5f757-65dkm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.773709 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-65dkm" podUID="1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.773665 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-65dkm" podUID="1d3ecaa2-3425-41f3-a8cf-cfbd0e94643c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 25 11:20:03 crc kubenswrapper[5005]: E0225 11:20:03.801786 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:03 crc kubenswrapper[5005]: E0225 11:20:03.853829 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:03 crc kubenswrapper[5005]: E0225 11:20:03.864149 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:03 crc kubenswrapper[5005]: E0225 11:20:03.864248 5005 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.916865 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:20:03 crc kubenswrapper[5005]: I0225 11:20:03.926929 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xwmct" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.074067 5005 generic.go:334] "Generic (PLEG): container finished" podID="871158bf-c5f6-4e49-981a-bf00d5b8c4c7" 
containerID="a1f530703922ade8a94f5905b688352a03c649440f48f176f6fda094a4b23fd7" exitCode=0 Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.074463 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" event={"ID":"871158bf-c5f6-4e49-981a-bf00d5b8c4c7","Type":"ContainerDied","Data":"a1f530703922ade8a94f5905b688352a03c649440f48f176f6fda094a4b23fd7"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.093645 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" event={"ID":"3320b664-bd3b-44df-93f6-0e7f6108f60f","Type":"ContainerStarted","Data":"e9f74d36e50a314519d49078f2fb5bf7fdfc720cd5b0d439e7fb27ec46c89696"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.093687 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" event={"ID":"3320b664-bd3b-44df-93f6-0e7f6108f60f","Type":"ContainerStarted","Data":"f39978faf1eef3a9f0acae4419c16f2ae0a27d33d9c4f017e61b00cb44a0f89e"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.094546 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.096783 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0f20782-1535-4c8a-8821-6ca8b3262a2e","Type":"ContainerStarted","Data":"dabb92e5ec7076978ac7bd09bdb2e1aad26dcf145d45c621dc3a3ba843eb02bc"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.097825 5005 generic.go:334] "Generic (PLEG): container finished" podID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerID="911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5" exitCode=0 Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.097862 5005 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhhkb" event={"ID":"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1","Type":"ContainerDied","Data":"911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.097876 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhhkb" event={"ID":"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1","Type":"ContainerStarted","Data":"75c47509906c638e9cf129396e6a07549fd7153d846a7bc76e58f18194204a1e"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.102392 5005 generic.go:334] "Generic (PLEG): container finished" podID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerID="a050921e8d57448330f27ac9cae59d3a55c022b059a7301a2739693da5e5c012" exitCode=0 Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.103987 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twk58" event={"ID":"7d259751-70f2-4fcc-a6c7-4b99993eb217","Type":"ContainerDied","Data":"a050921e8d57448330f27ac9cae59d3a55c022b059a7301a2739693da5e5c012"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.104014 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twk58" event={"ID":"7d259751-70f2-4fcc-a6c7-4b99993eb217","Type":"ContainerStarted","Data":"fd4fc558b9f780612dbf63fae249c8b80f8f063ea7dad5fa4e5d53d85965572e"} Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.117349 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.201089 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" podStartSLOduration=4.201073863 podStartE2EDuration="4.201073863s" podCreationTimestamp="2026-02-25 11:20:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:04.199125861 +0000 UTC m=+118.239858188" watchObservedRunningTime="2026-02-25 11:20:04.201073863 +0000 UTC m=+118.241806190" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.287032 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:04 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:04 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:04 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.287085 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.380848 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.381482 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.385970 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.386066 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.386570 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.467092 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.467176 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.568730 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.568858 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.568941 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.595133 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.711665 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:04 crc kubenswrapper[5005]: I0225 11:20:04.725555 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2a769c-92b0-4958-9339-5331a07280c5" path="/var/lib/kubelet/pods/0e2a769c-92b0-4958-9339-5331a07280c5/volumes" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.116485 5005 generic.go:334] "Generic (PLEG): container finished" podID="b0f20782-1535-4c8a-8821-6ca8b3262a2e" containerID="cedcc282012938e206f2e507d5fa3c630ae2c637fec5008c7010fe2026402c58" exitCode=0 Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.128472 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0f20782-1535-4c8a-8821-6ca8b3262a2e","Type":"ContainerDied","Data":"cedcc282012938e206f2e507d5fa3c630ae2c637fec5008c7010fe2026402c58"} Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.287862 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:05 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:05 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:05 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.287924 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.339878 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.588520 5005 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.686009 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z7l5\" (UniqueName: \"kubernetes.io/projected/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-kube-api-access-7z7l5\") pod \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.686080 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-secret-volume\") pod \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.686154 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-config-volume\") pod \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\" (UID: \"871158bf-c5f6-4e49-981a-bf00d5b8c4c7\") " Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.687260 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "871158bf-c5f6-4e49-981a-bf00d5b8c4c7" (UID: "871158bf-c5f6-4e49-981a-bf00d5b8c4c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.693315 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "871158bf-c5f6-4e49-981a-bf00d5b8c4c7" (UID: "871158bf-c5f6-4e49-981a-bf00d5b8c4c7"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.693456 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-kube-api-access-7z7l5" (OuterVolumeSpecName: "kube-api-access-7z7l5") pod "871158bf-c5f6-4e49-981a-bf00d5b8c4c7" (UID: "871158bf-c5f6-4e49-981a-bf00d5b8c4c7"). InnerVolumeSpecName "kube-api-access-7z7l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.788105 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.788138 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.788149 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z7l5\" (UniqueName: \"kubernetes.io/projected/871158bf-c5f6-4e49-981a-bf00d5b8c4c7-kube-api-access-7z7l5\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:05 crc kubenswrapper[5005]: I0225 11:20:05.842628 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6q7wr" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.143878 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b21e910-d77a-421f-a1ec-916d8d77bbd2","Type":"ContainerStarted","Data":"f3ffb906088982b7bccc4a3cb2571373ca85bfa44c5de3e66e372468ba3910c2"} Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.146995 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" event={"ID":"871158bf-c5f6-4e49-981a-bf00d5b8c4c7","Type":"ContainerDied","Data":"2ec1501c318a3c6dd6633bb59e48188d8fe6d618dfecec32dce0e76bb3a391f2"} Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.147029 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ec1501c318a3c6dd6633bb59e48188d8fe6d618dfecec32dce0e76bb3a391f2" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.147008 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.285934 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:06 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:06 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:06 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.295605 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.374073 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.503010 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kube-api-access\") pod \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.503082 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kubelet-dir\") pod \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\" (UID: \"b0f20782-1535-4c8a-8821-6ca8b3262a2e\") " Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.503447 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0f20782-1535-4c8a-8821-6ca8b3262a2e" (UID: "b0f20782-1535-4c8a-8821-6ca8b3262a2e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.509505 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0f20782-1535-4c8a-8821-6ca8b3262a2e" (UID: "b0f20782-1535-4c8a-8821-6ca8b3262a2e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.581073 5005 ???:1] "http: TLS handshake error from 192.168.126.11:47912: no serving certificate available for the kubelet" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.604124 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:06 crc kubenswrapper[5005]: I0225 11:20:06.604320 5005 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0f20782-1535-4c8a-8821-6ca8b3262a2e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.037747 5005 ???:1] "http: TLS handshake error from 192.168.126.11:47914: no serving certificate available for the kubelet" Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.156666 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b0f20782-1535-4c8a-8821-6ca8b3262a2e","Type":"ContainerDied","Data":"dabb92e5ec7076978ac7bd09bdb2e1aad26dcf145d45c621dc3a3ba843eb02bc"} Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.156714 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dabb92e5ec7076978ac7bd09bdb2e1aad26dcf145d45c621dc3a3ba843eb02bc" Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.156728 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.159308 5005 generic.go:334] "Generic (PLEG): container finished" podID="1b21e910-d77a-421f-a1ec-916d8d77bbd2" containerID="4c5c9c24e7eb42f491e419822925cee4068e1247fbdd864a59d74c6f4b79e4b3" exitCode=0 Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.159388 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b21e910-d77a-421f-a1ec-916d8d77bbd2","Type":"ContainerDied","Data":"4c5c9c24e7eb42f491e419822925cee4068e1247fbdd864a59d74c6f4b79e4b3"} Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.287102 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:07 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:07 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:07 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:07 crc kubenswrapper[5005]: I0225 11:20:07.287162 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:08 crc kubenswrapper[5005]: I0225 11:20:08.285816 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:08 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:08 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:08 crc kubenswrapper[5005]: healthz 
check failed Feb 25 11:20:08 crc kubenswrapper[5005]: I0225 11:20:08.286047 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:09 crc kubenswrapper[5005]: I0225 11:20:09.284652 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:09 crc kubenswrapper[5005]: [-]has-synced failed: reason withheld Feb 25 11:20:09 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:09 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:09 crc kubenswrapper[5005]: I0225 11:20:09.285090 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:10 crc kubenswrapper[5005]: I0225 11:20:10.286066 5005 patch_prober.go:28] interesting pod/router-default-5444994796-gnfvv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 25 11:20:10 crc kubenswrapper[5005]: [+]has-synced ok Feb 25 11:20:10 crc kubenswrapper[5005]: [+]process-running ok Feb 25 11:20:10 crc kubenswrapper[5005]: healthz check failed Feb 25 11:20:10 crc kubenswrapper[5005]: I0225 11:20:10.286126 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gnfvv" podUID="faec049e-e45a-4ab3-8761-f4bd01acc732" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 25 11:20:11 crc 
kubenswrapper[5005]: I0225 11:20:11.285507 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:20:11 crc kubenswrapper[5005]: I0225 11:20:11.288456 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gnfvv" Feb 25 11:20:11 crc kubenswrapper[5005]: I0225 11:20:11.699832 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 25 11:20:13 crc kubenswrapper[5005]: I0225 11:20:13.512420 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:20:13 crc kubenswrapper[5005]: I0225 11:20:13.515995 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:20:13 crc kubenswrapper[5005]: I0225 11:20:13.563695 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.563670979 podStartE2EDuration="2.563670979s" podCreationTimestamp="2026-02-25 11:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:13.558001192 +0000 UTC m=+127.598733519" watchObservedRunningTime="2026-02-25 11:20:13.563670979 +0000 UTC m=+127.604403306" Feb 25 11:20:13 crc kubenswrapper[5005]: I0225 11:20:13.686202 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:20:13 crc kubenswrapper[5005]: I0225 11:20:13.778464 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-65dkm" Feb 25 11:20:13 crc kubenswrapper[5005]: E0225 11:20:13.795887 5005 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:13 crc kubenswrapper[5005]: E0225 11:20:13.799730 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:13 crc kubenswrapper[5005]: E0225 11:20:13.802495 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:13 crc kubenswrapper[5005]: E0225 11:20:13.802578 5005 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" Feb 25 11:20:16 crc kubenswrapper[5005]: I0225 11:20:16.840267 5005 ???:1] "http: TLS handshake error from 192.168.126.11:56442: no serving certificate available for the kubelet" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.015145 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.093215 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kubelet-dir\") pod \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.093414 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kube-api-access\") pod \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\" (UID: \"1b21e910-d77a-421f-a1ec-916d8d77bbd2\") " Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.093401 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1b21e910-d77a-421f-a1ec-916d8d77bbd2" (UID: "1b21e910-d77a-421f-a1ec-916d8d77bbd2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.093986 5005 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.108509 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1b21e910-d77a-421f-a1ec-916d8d77bbd2" (UID: "1b21e910-d77a-421f-a1ec-916d8d77bbd2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.199077 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b21e910-d77a-421f-a1ec-916d8d77bbd2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.251361 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756f9dc565-vwbkh"] Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.253661 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerName="controller-manager" containerID="cri-o://e9f74d36e50a314519d49078f2fb5bf7fdfc720cd5b0d439e7fb27ec46c89696" gracePeriod=30 Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.276516 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1b21e910-d77a-421f-a1ec-916d8d77bbd2","Type":"ContainerDied","Data":"f3ffb906088982b7bccc4a3cb2571373ca85bfa44c5de3e66e372468ba3910c2"} Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.276572 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ffb906088982b7bccc4a3cb2571373ca85bfa44c5de3e66e372468ba3910c2" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.276588 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.281189 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf"] Feb 25 11:20:18 crc kubenswrapper[5005]: I0225 11:20:18.281454 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerName="route-controller-manager" containerID="cri-o://fb3866567aa9dba2dd318ea031f13c8beb2a112a4a906fc96773df8a9ab26e73" gracePeriod=30 Feb 25 11:20:19 crc kubenswrapper[5005]: I0225 11:20:19.283666 5005 generic.go:334] "Generic (PLEG): container finished" podID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerID="fb3866567aa9dba2dd318ea031f13c8beb2a112a4a906fc96773df8a9ab26e73" exitCode=0 Feb 25 11:20:19 crc kubenswrapper[5005]: I0225 11:20:19.283748 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" event={"ID":"39b36f61-4a48-4c6e-b755-9ffc31065462","Type":"ContainerDied","Data":"fb3866567aa9dba2dd318ea031f13c8beb2a112a4a906fc96773df8a9ab26e73"} Feb 25 11:20:19 crc kubenswrapper[5005]: I0225 11:20:19.285401 5005 generic.go:334] "Generic (PLEG): container finished" podID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerID="e9f74d36e50a314519d49078f2fb5bf7fdfc720cd5b0d439e7fb27ec46c89696" exitCode=0 Feb 25 11:20:19 crc kubenswrapper[5005]: I0225 11:20:19.285434 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" event={"ID":"3320b664-bd3b-44df-93f6-0e7f6108f60f","Type":"ContainerDied","Data":"e9f74d36e50a314519d49078f2fb5bf7fdfc720cd5b0d439e7fb27ec46c89696"} Feb 25 11:20:20 crc kubenswrapper[5005]: I0225 11:20:20.980877 5005 patch_prober.go:28] interesting 
pod/route-controller-manager-746c6848cc-rt6bf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 25 11:20:20 crc kubenswrapper[5005]: I0225 11:20:20.980946 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 25 11:20:21 crc kubenswrapper[5005]: I0225 11:20:21.314572 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:20:22 crc kubenswrapper[5005]: I0225 11:20:22.690930 5005 patch_prober.go:28] interesting pod/controller-manager-756f9dc565-vwbkh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 25 11:20:22 crc kubenswrapper[5005]: I0225 11:20:22.691361 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 25 11:20:23 crc kubenswrapper[5005]: E0225 11:20:23.793920 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:23 crc kubenswrapper[5005]: E0225 11:20:23.795968 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:23 crc kubenswrapper[5005]: E0225 11:20:23.797788 5005 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 25 11:20:23 crc kubenswrapper[5005]: E0225 11:20:23.797819 5005 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" Feb 25 11:20:27 crc kubenswrapper[5005]: E0225 11:20:27.013407 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 25 11:20:27 crc kubenswrapper[5005]: E0225 11:20:27.013894 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vhjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vw6ng_openshift-marketplace(4e5d39a0-d977-4973-a2a3-55699e86de91): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:27 crc kubenswrapper[5005]: E0225 11:20:27.015239 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vw6ng" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" Feb 25 11:20:30 crc 
kubenswrapper[5005]: E0225 11:20:30.458985 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vw6ng" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.524733 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.525247 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctf8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z5mfq_openshift-marketplace(5552a8c8-de53-484f-a47f-42dbd9983137): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.526491 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-z5mfq" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" Feb 25 11:20:30 crc 
kubenswrapper[5005]: E0225 11:20:30.542700 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.542835 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62sg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-zhhkb_openshift-marketplace(8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.544972 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zhhkb" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.548394 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.548498 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-928hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-twk58_openshift-marketplace(7d259751-70f2-4fcc-a6c7-4b99993eb217): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:30 crc kubenswrapper[5005]: E0225 11:20:30.549833 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-twk58" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" Feb 25 11:20:30 crc 
kubenswrapper[5005]: W0225 11:20:30.784448 5005 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/podman-12188.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/podman-12188.scope: no such file or directory Feb 25 11:20:30 crc kubenswrapper[5005]: W0225 11:20:30.784541 5005 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-conmon-0600371549dd6d63a4e1dfad78cc6bca01415938ea110af57e6e80681b283b60.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-conmon-0600371549dd6d63a4e1dfad78cc6bca01415938ea110af57e6e80681b283b60.scope: no such file or directory Feb 25 11:20:30 crc kubenswrapper[5005]: W0225 11:20:30.784596 5005 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-0600371549dd6d63a4e1dfad78cc6bca01415938ea110af57e6e80681b283b60.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-0600371549dd6d63a4e1dfad78cc6bca01415938ea110af57e6e80681b283b60.scope: no such file or directory Feb 25 11:20:30 crc kubenswrapper[5005]: W0225 11:20:30.784638 5005 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/podman-12274.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/podman-12274.scope: no such file or directory Feb 25 11:20:30 crc kubenswrapper[5005]: W0225 11:20:30.784703 5005 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-conmon-428840ffb89754e720d3de482db9d832ee6164fb80f0c007035d9513168d7c0a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-conmon-428840ffb89754e720d3de482db9d832ee6164fb80f0c007035d9513168d7c0a.scope: no such file or directory Feb 25 11:20:30 crc kubenswrapper[5005]: W0225 11:20:30.784743 5005 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-428840ffb89754e720d3de482db9d832ee6164fb80f0c007035d9513168d7c0a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-428840ffb89754e720d3de482db9d832ee6164fb80f0c007035d9513168d7c0a.scope: no such file or directory Feb 25 11:20:31 crc kubenswrapper[5005]: E0225 11:20:31.306660 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 25 11:20:31 crc kubenswrapper[5005]: E0225 11:20:31.306796 5005 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 25 11:20:31 crc kubenswrapper[5005]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 25 11:20:31 crc kubenswrapper[5005]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qm9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29533640-qhftv_openshift-infra(705e7826-0109-4ea7-bfe8-3cf9e37285bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 25 11:20:31 crc kubenswrapper[5005]: > logger="UnhandledError" Feb 25 11:20:31 crc kubenswrapper[5005]: E0225 11:20:31.307852 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29533640-qhftv" podUID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" Feb 25 11:20:31 crc kubenswrapper[5005]: I0225 11:20:31.365665 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qcfmz_7eaa1833-a4ad-422c-93db-6c442609d050/kube-multus-additional-cni-plugins/0.log" Feb 25 11:20:31 crc kubenswrapper[5005]: I0225 11:20:31.365711 5005 generic.go:334] "Generic (PLEG): container finished" podID="7eaa1833-a4ad-422c-93db-6c442609d050" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" exitCode=137 Feb 25 11:20:31 crc kubenswrapper[5005]: I0225 11:20:31.365839 5005 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" event={"ID":"7eaa1833-a4ad-422c-93db-6c442609d050","Type":"ContainerDied","Data":"67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e"} Feb 25 11:20:31 crc kubenswrapper[5005]: E0225 11:20:31.368404 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29533640-qhftv" podUID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" Feb 25 11:20:31 crc kubenswrapper[5005]: I0225 11:20:31.981672 5005 patch_prober.go:28] interesting pod/route-controller-manager-746c6848cc-rt6bf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:20:31 crc kubenswrapper[5005]: I0225 11:20:31.982105 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.636392 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zhhkb" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.636389 5005 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z5mfq" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.636453 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-twk58" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.698798 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.704989 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.706461 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qcfmz_7eaa1833-a4ad-422c-93db-6c442609d050/kube-multus-additional-cni-plugins/0.log" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.706564 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.744752 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8448bcbf66-wv8zb"] Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.745027 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871158bf-c5f6-4e49-981a-bf00d5b8c4c7" containerName="collect-profiles" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745042 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="871158bf-c5f6-4e49-981a-bf00d5b8c4c7" containerName="collect-profiles" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.745050 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f20782-1535-4c8a-8821-6ca8b3262a2e" containerName="pruner" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745057 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f20782-1535-4c8a-8821-6ca8b3262a2e" containerName="pruner" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.745073 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerName="controller-manager" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745079 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerName="controller-manager" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.745088 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745094 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.745100 5005 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1b21e910-d77a-421f-a1ec-916d8d77bbd2" containerName="pruner" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745107 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b21e910-d77a-421f-a1ec-916d8d77bbd2" containerName="pruner" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.745118 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerName="route-controller-manager" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745124 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerName="route-controller-manager" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745223 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b21e910-d77a-421f-a1ec-916d8d77bbd2" containerName="pruner" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745231 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="871158bf-c5f6-4e49-981a-bf00d5b8c4c7" containerName="collect-profiles" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745243 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerName="controller-manager" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745249 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f20782-1535-4c8a-8821-6ca8b3262a2e" containerName="pruner" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745257 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" containerName="route-controller-manager" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.745269 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" containerName="kube-multus-additional-cni-plugins" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.746061 5005 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.749654 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.749951 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6r8dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jlrk4_openshift-marketplace(d4ff9940-8045-4d50-99a8-85f30d202aae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.751570 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jlrk4" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.759091 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8448bcbf66-wv8zb"] Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.766994 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.767122 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9f2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dgfww_openshift-marketplace(a80639a8-456e-4f88-a949-ce0c6fa8284c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.768969 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dgfww" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" Feb 25 11:20:32 crc 
kubenswrapper[5005]: E0225 11:20:32.780664 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.780853 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flpk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9bdm4_openshift-marketplace(f0fc9766-1d31-4d5a-8234-47d9e32f59be): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 25 11:20:32 crc kubenswrapper[5005]: E0225 11:20:32.782270 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9bdm4" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797394 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgn94\" (UniqueName: \"kubernetes.io/projected/3320b664-bd3b-44df-93f6-0e7f6108f60f-kube-api-access-rgn94\") pod \"3320b664-bd3b-44df-93f6-0e7f6108f60f\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797436 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7eaa1833-a4ad-422c-93db-6c442609d050-ready\") pod \"7eaa1833-a4ad-422c-93db-6c442609d050\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797467 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-proxy-ca-bundles\") pod \"3320b664-bd3b-44df-93f6-0e7f6108f60f\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797499 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-client-ca\") pod \"3320b664-bd3b-44df-93f6-0e7f6108f60f\" 
(UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797550 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npvwj\" (UniqueName: \"kubernetes.io/projected/7eaa1833-a4ad-422c-93db-6c442609d050-kube-api-access-npvwj\") pod \"7eaa1833-a4ad-422c-93db-6c442609d050\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797566 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-config\") pod \"39b36f61-4a48-4c6e-b755-9ffc31065462\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797587 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7eaa1833-a4ad-422c-93db-6c442609d050-tuning-conf-dir\") pod \"7eaa1833-a4ad-422c-93db-6c442609d050\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797612 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b36f61-4a48-4c6e-b755-9ffc31065462-serving-cert\") pod \"39b36f61-4a48-4c6e-b755-9ffc31065462\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797626 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-client-ca\") pod \"39b36f61-4a48-4c6e-b755-9ffc31065462\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797641 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3320b664-bd3b-44df-93f6-0e7f6108f60f-serving-cert\") pod \"3320b664-bd3b-44df-93f6-0e7f6108f60f\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797667 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fctx\" (UniqueName: \"kubernetes.io/projected/39b36f61-4a48-4c6e-b755-9ffc31065462-kube-api-access-6fctx\") pod \"39b36f61-4a48-4c6e-b755-9ffc31065462\" (UID: \"39b36f61-4a48-4c6e-b755-9ffc31065462\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797702 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7eaa1833-a4ad-422c-93db-6c442609d050-cni-sysctl-allowlist\") pod \"7eaa1833-a4ad-422c-93db-6c442609d050\" (UID: \"7eaa1833-a4ad-422c-93db-6c442609d050\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.797749 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-config\") pod \"3320b664-bd3b-44df-93f6-0e7f6108f60f\" (UID: \"3320b664-bd3b-44df-93f6-0e7f6108f60f\") " Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.799259 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-config" (OuterVolumeSpecName: "config") pod "39b36f61-4a48-4c6e-b755-9ffc31065462" (UID: "39b36f61-4a48-4c6e-b755-9ffc31065462"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.800295 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3320b664-bd3b-44df-93f6-0e7f6108f60f" (UID: "3320b664-bd3b-44df-93f6-0e7f6108f60f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.800745 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-client-ca" (OuterVolumeSpecName: "client-ca") pod "3320b664-bd3b-44df-93f6-0e7f6108f60f" (UID: "3320b664-bd3b-44df-93f6-0e7f6108f60f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.801119 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eaa1833-a4ad-422c-93db-6c442609d050-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "7eaa1833-a4ad-422c-93db-6c442609d050" (UID: "7eaa1833-a4ad-422c-93db-6c442609d050"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.801309 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-config" (OuterVolumeSpecName: "config") pod "3320b664-bd3b-44df-93f6-0e7f6108f60f" (UID: "3320b664-bd3b-44df-93f6-0e7f6108f60f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.801559 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-client-ca" (OuterVolumeSpecName: "client-ca") pod "39b36f61-4a48-4c6e-b755-9ffc31065462" (UID: "39b36f61-4a48-4c6e-b755-9ffc31065462"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.802203 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eaa1833-a4ad-422c-93db-6c442609d050-ready" (OuterVolumeSpecName: "ready") pod "7eaa1833-a4ad-422c-93db-6c442609d050" (UID: "7eaa1833-a4ad-422c-93db-6c442609d050"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.803465 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eaa1833-a4ad-422c-93db-6c442609d050-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7eaa1833-a4ad-422c-93db-6c442609d050" (UID: "7eaa1833-a4ad-422c-93db-6c442609d050"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.804220 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3320b664-bd3b-44df-93f6-0e7f6108f60f-kube-api-access-rgn94" (OuterVolumeSpecName: "kube-api-access-rgn94") pod "3320b664-bd3b-44df-93f6-0e7f6108f60f" (UID: "3320b664-bd3b-44df-93f6-0e7f6108f60f"). InnerVolumeSpecName "kube-api-access-rgn94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.804412 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b36f61-4a48-4c6e-b755-9ffc31065462-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39b36f61-4a48-4c6e-b755-9ffc31065462" (UID: "39b36f61-4a48-4c6e-b755-9ffc31065462"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.805032 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3320b664-bd3b-44df-93f6-0e7f6108f60f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3320b664-bd3b-44df-93f6-0e7f6108f60f" (UID: "3320b664-bd3b-44df-93f6-0e7f6108f60f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.805884 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eaa1833-a4ad-422c-93db-6c442609d050-kube-api-access-npvwj" (OuterVolumeSpecName: "kube-api-access-npvwj") pod "7eaa1833-a4ad-422c-93db-6c442609d050" (UID: "7eaa1833-a4ad-422c-93db-6c442609d050"). InnerVolumeSpecName "kube-api-access-npvwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.806086 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b36f61-4a48-4c6e-b755-9ffc31065462-kube-api-access-6fctx" (OuterVolumeSpecName: "kube-api-access-6fctx") pod "39b36f61-4a48-4c6e-b755-9ffc31065462" (UID: "39b36f61-4a48-4c6e-b755-9ffc31065462"). InnerVolumeSpecName "kube-api-access-6fctx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899250 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-config\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899551 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-serving-cert\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899585 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-client-ca\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899611 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-proxy-ca-bundles\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899654 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltnlc\" (UniqueName: 
\"kubernetes.io/projected/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-kube-api-access-ltnlc\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899732 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b36f61-4a48-4c6e-b755-9ffc31065462-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899764 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899776 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3320b664-bd3b-44df-93f6-0e7f6108f60f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899786 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fctx\" (UniqueName: \"kubernetes.io/projected/39b36f61-4a48-4c6e-b755-9ffc31065462-kube-api-access-6fctx\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899797 5005 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7eaa1833-a4ad-422c-93db-6c442609d050-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899806 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899817 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgn94\" 
(UniqueName: \"kubernetes.io/projected/3320b664-bd3b-44df-93f6-0e7f6108f60f-kube-api-access-rgn94\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899827 5005 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7eaa1833-a4ad-422c-93db-6c442609d050-ready\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899838 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899847 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3320b664-bd3b-44df-93f6-0e7f6108f60f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899858 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npvwj\" (UniqueName: \"kubernetes.io/projected/7eaa1833-a4ad-422c-93db-6c442609d050-kube-api-access-npvwj\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899867 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b36f61-4a48-4c6e-b755-9ffc31065462-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:32 crc kubenswrapper[5005]: I0225 11:20:32.899876 5005 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7eaa1833-a4ad-422c-93db-6c442609d050-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.000644 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltnlc\" (UniqueName: \"kubernetes.io/projected/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-kube-api-access-ltnlc\") pod 
\"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.000698 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-config\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.000731 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-serving-cert\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.000769 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-client-ca\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.000798 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-proxy-ca-bundles\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.002048 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-proxy-ca-bundles\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.002357 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-client-ca\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.003423 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-config\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.005823 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-serving-cert\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.022344 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltnlc\" (UniqueName: \"kubernetes.io/projected/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-kube-api-access-ltnlc\") pod \"controller-manager-8448bcbf66-wv8zb\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.079793 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.376929 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" event={"ID":"39b36f61-4a48-4c6e-b755-9ffc31065462","Type":"ContainerDied","Data":"a7d3e260bb9e9f2e7847e38ccd3c519a113bbecdda6a81ea6087e04c3750870e"} Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.377340 5005 scope.go:117] "RemoveContainer" containerID="fb3866567aa9dba2dd318ea031f13c8beb2a112a4a906fc96773df8a9ab26e73" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.377487 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.382853 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" event={"ID":"3320b664-bd3b-44df-93f6-0e7f6108f60f","Type":"ContainerDied","Data":"f39978faf1eef3a9f0acae4419c16f2ae0a27d33d9c4f017e61b00cb44a0f89e"} Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.382930 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.394013 5005 generic.go:334] "Generic (PLEG): container finished" podID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerID="3837af66c67ae31840de45c8e92d4efa2354836e56f46a45e9887725581f740e" exitCode=0 Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.394580 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qf86g" event={"ID":"5a7766dc-4b89-4d09-87a3-7690f8af1ad7","Type":"ContainerDied","Data":"3837af66c67ae31840de45c8e92d4efa2354836e56f46a45e9887725581f740e"} Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.399165 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.401396 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a"} Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.401616 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.404454 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qcfmz_7eaa1833-a4ad-422c-93db-6c442609d050/kube-multus-additional-cni-plugins/0.log" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.404528 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" event={"ID":"7eaa1833-a4ad-422c-93db-6c442609d050","Type":"ContainerDied","Data":"7d48cff40346e6468c78887a74a5267ab2028939d4584cb2a99b1da64c92e547"} Feb 25 
11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.404868 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qcfmz" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.424915 5005 scope.go:117] "RemoveContainer" containerID="e9f74d36e50a314519d49078f2fb5bf7fdfc720cd5b0d439e7fb27ec46c89696" Feb 25 11:20:33 crc kubenswrapper[5005]: E0225 11:20:33.425595 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jlrk4" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" Feb 25 11:20:33 crc kubenswrapper[5005]: E0225 11:20:33.426043 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9bdm4" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.427485 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-756f9dc565-vwbkh"] Feb 25 11:20:33 crc kubenswrapper[5005]: E0225 11:20:33.430106 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dgfww" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.432233 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-756f9dc565-vwbkh"] Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.453316 5005 
scope.go:117] "RemoveContainer" containerID="67023985697f96200596a4d662626679a8a51a06b7aaad02de5efcf52ab2c31e" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.483265 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8448bcbf66-wv8zb"] Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.528887 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=60.528871587 podStartE2EDuration="1m0.528871587s" podCreationTimestamp="2026-02-25 11:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:33.518975187 +0000 UTC m=+147.559707524" watchObservedRunningTime="2026-02-25 11:20:33.528871587 +0000 UTC m=+147.569603914" Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.530799 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf"] Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.533823 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-746c6848cc-rt6bf"] Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.543538 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qcfmz"] Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.543763 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qcfmz"] Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.690972 5005 patch_prober.go:28] interesting pod/controller-manager-756f9dc565-vwbkh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 25 11:20:33 crc kubenswrapper[5005]: I0225 11:20:33.691045 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-756f9dc565-vwbkh" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.286430 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-stkgx" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.413248 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" event={"ID":"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d","Type":"ContainerStarted","Data":"0d10e1cf66982b948fd06540bdf94adf7da330b4870c985c7f8044277a28beae"} Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.413308 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" event={"ID":"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d","Type":"ContainerStarted","Data":"d65e8ec3d1a3383f242d9ad0260ad723dadac786e8d36e26844908ab6c0a5f5c"} Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.413524 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.417410 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.430387 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" podStartSLOduration=16.430350221 podStartE2EDuration="16.430350221s" podCreationTimestamp="2026-02-25 11:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:34.426780135 +0000 UTC m=+148.467512472" watchObservedRunningTime="2026-02-25 11:20:34.430350221 +0000 UTC m=+148.471082548" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.692533 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3320b664-bd3b-44df-93f6-0e7f6108f60f" path="/var/lib/kubelet/pods/3320b664-bd3b-44df-93f6-0e7f6108f60f/volumes" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.693019 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b36f61-4a48-4c6e-b755-9ffc31065462" path="/var/lib/kubelet/pods/39b36f61-4a48-4c6e-b755-9ffc31065462/volumes" Feb 25 11:20:34 crc kubenswrapper[5005]: I0225 11:20:34.693676 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eaa1833-a4ad-422c-93db-6c442609d050" path="/var/lib/kubelet/pods/7eaa1833-a4ad-422c-93db-6c442609d050/volumes" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.422560 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qf86g" event={"ID":"5a7766dc-4b89-4d09-87a3-7690f8af1ad7","Type":"ContainerStarted","Data":"23d6f2a7955cebedfbba289b5e65d81ef367fdb3fdf74514774ca8dccc761c9e"} Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.442103 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qf86g" podStartSLOduration=3.247107157 podStartE2EDuration="36.442086318s" podCreationTimestamp="2026-02-25 11:19:59 +0000 UTC" firstStartedPulling="2026-02-25 11:20:01.731698237 +0000 UTC m=+115.772430564" lastFinishedPulling="2026-02-25 11:20:34.926677398 +0000 UTC 
m=+148.967409725" observedRunningTime="2026-02-25 11:20:35.440126018 +0000 UTC m=+149.480858335" watchObservedRunningTime="2026-02-25 11:20:35.442086318 +0000 UTC m=+149.482818645" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.630738 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r"] Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.631979 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.634970 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.634980 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.635280 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.635452 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.635528 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.635709 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.643672 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r"] Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 
11:20:35.702239 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.734619 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-client-ca\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.734710 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2l6b\" (UniqueName: \"kubernetes.io/projected/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-kube-api-access-x2l6b\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.734803 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-serving-cert\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.734849 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-config\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.836322 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2l6b\" (UniqueName: \"kubernetes.io/projected/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-kube-api-access-x2l6b\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.836837 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-serving-cert\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.836870 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-config\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.836896 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-client-ca\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.838397 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-client-ca\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: 
\"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.838620 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-config\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.850471 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-serving-cert\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.858011 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2l6b\" (UniqueName: \"kubernetes.io/projected/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-kube-api-access-x2l6b\") pod \"route-controller-manager-6fd99546b4-jkt6r\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:35 crc kubenswrapper[5005]: I0225 11:20:35.953080 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:36 crc kubenswrapper[5005]: I0225 11:20:36.189076 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r"] Feb 25 11:20:36 crc kubenswrapper[5005]: W0225 11:20:36.196360 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82d88a40_4cc0_4e78_89c1_90fb1f0f2fc3.slice/crio-48d456b66ce50a45b11fe57dbc3a75e2596f7f4de1effbc017436ee243c09a90 WatchSource:0}: Error finding container 48d456b66ce50a45b11fe57dbc3a75e2596f7f4de1effbc017436ee243c09a90: Status 404 returned error can't find the container with id 48d456b66ce50a45b11fe57dbc3a75e2596f7f4de1effbc017436ee243c09a90 Feb 25 11:20:36 crc kubenswrapper[5005]: I0225 11:20:36.429496 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" event={"ID":"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3","Type":"ContainerStarted","Data":"e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f"} Feb 25 11:20:36 crc kubenswrapper[5005]: I0225 11:20:36.429565 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" event={"ID":"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3","Type":"ContainerStarted","Data":"48d456b66ce50a45b11fe57dbc3a75e2596f7f4de1effbc017436ee243c09a90"} Feb 25 11:20:36 crc kubenswrapper[5005]: I0225 11:20:36.485627 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" podStartSLOduration=18.485609155 podStartE2EDuration="18.485609155s" podCreationTimestamp="2026-02-25 11:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-25 11:20:36.484671132 +0000 UTC m=+150.525403459" watchObservedRunningTime="2026-02-25 11:20:36.485609155 +0000 UTC m=+150.526341482" Feb 25 11:20:36 crc kubenswrapper[5005]: I0225 11:20:36.487466 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.48745126 podStartE2EDuration="1.48745126s" podCreationTimestamp="2026-02-25 11:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:36.460268091 +0000 UTC m=+150.501000428" watchObservedRunningTime="2026-02-25 11:20:36.48745126 +0000 UTC m=+150.528183587" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.438089 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.443895 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.590779 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.591462 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.594926 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.596062 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.629625 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.765660 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.765752 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.866980 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.867069 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.867134 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.889813 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:37 crc kubenswrapper[5005]: I0225 11:20:37.907707 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.109964 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 25 11:20:38 crc kubenswrapper[5005]: W0225 11:20:38.118312 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0d9ee39f_feeb_433b_bc90_810d1df73ff4.slice/crio-230a0deb3d8e012260d29282fd054eff7842f945f91860715663ffaff67d8c24 WatchSource:0}: Error finding container 230a0deb3d8e012260d29282fd054eff7842f945f91860715663ffaff67d8c24: Status 404 returned error can't find the container with id 230a0deb3d8e012260d29282fd054eff7842f945f91860715663ffaff67d8c24 Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.239228 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8448bcbf66-wv8zb"] Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.239426 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" podUID="a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" containerName="controller-manager" containerID="cri-o://0d10e1cf66982b948fd06540bdf94adf7da330b4870c985c7f8044277a28beae" gracePeriod=30 Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.337888 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r"] Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.452156 5005 generic.go:334] "Generic (PLEG): container finished" podID="a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" containerID="0d10e1cf66982b948fd06540bdf94adf7da330b4870c985c7f8044277a28beae" exitCode=0 Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.452227 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" 
event={"ID":"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d","Type":"ContainerDied","Data":"0d10e1cf66982b948fd06540bdf94adf7da330b4870c985c7f8044277a28beae"} Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.454569 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0d9ee39f-feeb-433b-bc90-810d1df73ff4","Type":"ContainerStarted","Data":"5e6b35615536033b0765cd9602cf40d1ef1b3aa49cd14e367aa778438d7d57ed"} Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.454623 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0d9ee39f-feeb-433b-bc90-810d1df73ff4","Type":"ContainerStarted","Data":"230a0deb3d8e012260d29282fd054eff7842f945f91860715663ffaff67d8c24"} Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.470593 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.470128913 podStartE2EDuration="1.470128913s" podCreationTimestamp="2026-02-25 11:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:38.466879478 +0000 UTC m=+152.507611805" watchObservedRunningTime="2026-02-25 11:20:38.470128913 +0000 UTC m=+152.510861240" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.619745 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.781764 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-config\") pod \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.781837 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-client-ca\") pod \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.781862 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-proxy-ca-bundles\") pod \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.781903 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltnlc\" (UniqueName: \"kubernetes.io/projected/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-kube-api-access-ltnlc\") pod \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.781950 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-serving-cert\") pod \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\" (UID: \"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d\") " Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.782776 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" (UID: "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.782832 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" (UID: "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.782853 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-config" (OuterVolumeSpecName: "config") pod "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" (UID: "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.786966 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-kube-api-access-ltnlc" (OuterVolumeSpecName: "kube-api-access-ltnlc") pod "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" (UID: "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d"). InnerVolumeSpecName "kube-api-access-ltnlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.787075 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" (UID: "a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.883013 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.883058 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.883068 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.883081 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltnlc\" (UniqueName: \"kubernetes.io/projected/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-kube-api-access-ltnlc\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:38 crc kubenswrapper[5005]: I0225 11:20:38.883091 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.471072 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" event={"ID":"a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d","Type":"ContainerDied","Data":"d65e8ec3d1a3383f242d9ad0260ad723dadac786e8d36e26844908ab6c0a5f5c"} Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.471109 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8448bcbf66-wv8zb" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.471422 5005 scope.go:117] "RemoveContainer" containerID="0d10e1cf66982b948fd06540bdf94adf7da330b4870c985c7f8044277a28beae" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.472766 5005 generic.go:334] "Generic (PLEG): container finished" podID="0d9ee39f-feeb-433b-bc90-810d1df73ff4" containerID="5e6b35615536033b0765cd9602cf40d1ef1b3aa49cd14e367aa778438d7d57ed" exitCode=0 Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.472870 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0d9ee39f-feeb-433b-bc90-810d1df73ff4","Type":"ContainerDied","Data":"5e6b35615536033b0765cd9602cf40d1ef1b3aa49cd14e367aa778438d7d57ed"} Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.472939 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" podUID="82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" containerName="route-controller-manager" containerID="cri-o://e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f" gracePeriod=30 Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.517805 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8448bcbf66-wv8zb"] Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.525544 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8448bcbf66-wv8zb"] Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.538117 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rx5lf"] Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.638429 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-587bcdf5bc-whn89"] Feb 25 11:20:39 crc kubenswrapper[5005]: E0225 11:20:39.638674 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" containerName="controller-manager" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.638691 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" containerName="controller-manager" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.638797 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" containerName="controller-manager" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.639148 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.641603 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.643289 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.644115 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.644139 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.645703 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.646500 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.651226 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-587bcdf5bc-whn89"] Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.653809 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.756899 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.756969 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.795721 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-proxy-ca-bundles\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.795787 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-config\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.795828 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c9939e-1f30-42e8-8d16-e706a5cc356e-serving-cert\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: 
\"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.795949 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgf96\" (UniqueName: \"kubernetes.io/projected/55c9939e-1f30-42e8-8d16-e706a5cc356e-kube-api-access-tgf96\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.795998 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-client-ca\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.822977 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.845312 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.896591 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-config\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.896650 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c9939e-1f30-42e8-8d16-e706a5cc356e-serving-cert\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.896722 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgf96\" (UniqueName: \"kubernetes.io/projected/55c9939e-1f30-42e8-8d16-e706a5cc356e-kube-api-access-tgf96\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.896747 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-client-ca\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.896775 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-proxy-ca-bundles\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.897945 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-client-ca\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.898934 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-config\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.899082 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-proxy-ca-bundles\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.913414 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.997439 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2l6b\" (UniqueName: \"kubernetes.io/projected/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-kube-api-access-x2l6b\") pod \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\" (UID: 
\"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.997489 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-serving-cert\") pod \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.997529 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-client-ca\") pod \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.997561 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-config\") pod \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\" (UID: \"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3\") " Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.998282 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-client-ca" (OuterVolumeSpecName: "client-ca") pod "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" (UID: "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:39 crc kubenswrapper[5005]: I0225 11:20:39.998425 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-config" (OuterVolumeSpecName: "config") pod "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" (UID: "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.059094 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c9939e-1f30-42e8-8d16-e706a5cc356e-serving-cert\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.059678 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgf96\" (UniqueName: \"kubernetes.io/projected/55c9939e-1f30-42e8-8d16-e706a5cc356e-kube-api-access-tgf96\") pod \"controller-manager-587bcdf5bc-whn89\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.070993 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-kube-api-access-x2l6b" (OuterVolumeSpecName: "kube-api-access-x2l6b") pod "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" (UID: "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3"). InnerVolumeSpecName "kube-api-access-x2l6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.071295 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" (UID: "82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.098897 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2l6b\" (UniqueName: \"kubernetes.io/projected/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-kube-api-access-x2l6b\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.098927 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.098936 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.098945 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.251444 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.479214 5005 generic.go:334] "Generic (PLEG): container finished" podID="82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" containerID="e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f" exitCode=0 Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.479279 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" event={"ID":"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3","Type":"ContainerDied","Data":"e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f"} Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.479291 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.479306 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r" event={"ID":"82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3","Type":"ContainerDied","Data":"48d456b66ce50a45b11fe57dbc3a75e2596f7f4de1effbc017436ee243c09a90"} Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.479325 5005 scope.go:117] "RemoveContainer" containerID="e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.495510 5005 scope.go:117] "RemoveContainer" containerID="e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f" Feb 25 11:20:40 crc kubenswrapper[5005]: E0225 11:20:40.497155 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f\": container with ID starting with e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f not 
found: ID does not exist" containerID="e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.497203 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f"} err="failed to get container status \"e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f\": rpc error: code = NotFound desc = could not find container \"e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f\": container with ID starting with e0ec1569d14c1bd09344870760a186f5ad9b6e5d2b87153bfc31c7c70720368f not found: ID does not exist" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.505961 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r"] Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.513438 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fd99546b4-jkt6r"] Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.521232 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.591167 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qf86g"] Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.643896 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-587bcdf5bc-whn89"] Feb 25 11:20:40 crc kubenswrapper[5005]: W0225 11:20:40.657241 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c9939e_1f30_42e8_8d16_e706a5cc356e.slice/crio-57ba796c84c5ac0038669c1a249ab08cdb866cd75a76063e4c6d31472b2f4e38 WatchSource:0}: Error finding 
container 57ba796c84c5ac0038669c1a249ab08cdb866cd75a76063e4c6d31472b2f4e38: Status 404 returned error can't find the container with id 57ba796c84c5ac0038669c1a249ab08cdb866cd75a76063e4c6d31472b2f4e38 Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.691765 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" path="/var/lib/kubelet/pods/82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3/volumes" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.692477 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d" path="/var/lib/kubelet/pods/a0bc35e4-99cf-4c65-bfbb-d7003a8cba4d/volumes" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.752405 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.914244 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kube-api-access\") pod \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.914365 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kubelet-dir\") pod \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\" (UID: \"0d9ee39f-feeb-433b-bc90-810d1df73ff4\") " Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.914432 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d9ee39f-feeb-433b-bc90-810d1df73ff4" (UID: "0d9ee39f-feeb-433b-bc90-810d1df73ff4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.914671 5005 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:40 crc kubenswrapper[5005]: I0225 11:20:40.919638 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d9ee39f-feeb-433b-bc90-810d1df73ff4" (UID: "0d9ee39f-feeb-433b-bc90-810d1df73ff4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.015322 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9ee39f-feeb-433b-bc90-810d1df73ff4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.488989 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0d9ee39f-feeb-433b-bc90-810d1df73ff4","Type":"ContainerDied","Data":"230a0deb3d8e012260d29282fd054eff7842f945f91860715663ffaff67d8c24"} Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.489471 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230a0deb3d8e012260d29282fd054eff7842f945f91860715663ffaff67d8c24" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.489000 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.491986 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" event={"ID":"55c9939e-1f30-42e8-8d16-e706a5cc356e","Type":"ContainerStarted","Data":"a7922367aeb71c1e9167b70c0848a4a5678034d5ae291bfddf7616c1e1f94738"} Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.492031 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" event={"ID":"55c9939e-1f30-42e8-8d16-e706a5cc356e","Type":"ContainerStarted","Data":"57ba796c84c5ac0038669c1a249ab08cdb866cd75a76063e4c6d31472b2f4e38"} Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.492385 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.503230 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.528048 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" podStartSLOduration=3.528026932 podStartE2EDuration="3.528026932s" podCreationTimestamp="2026-02-25 11:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:41.527165161 +0000 UTC m=+155.567897488" watchObservedRunningTime="2026-02-25 11:20:41.528026932 +0000 UTC m=+155.568759259" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.637989 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj"] Feb 25 11:20:41 crc 
kubenswrapper[5005]: E0225 11:20:41.638275 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9ee39f-feeb-433b-bc90-810d1df73ff4" containerName="pruner" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.638297 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9ee39f-feeb-433b-bc90-810d1df73ff4" containerName="pruner" Feb 25 11:20:41 crc kubenswrapper[5005]: E0225 11:20:41.638326 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" containerName="route-controller-manager" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.638334 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" containerName="route-controller-manager" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.638541 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d88a40-4cc0-4e78-89c1-90fb1f0f2fc3" containerName="route-controller-manager" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.638566 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9ee39f-feeb-433b-bc90-810d1df73ff4" containerName="pruner" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.639059 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.644288 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.645146 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.645323 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.645968 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.646093 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.652074 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj"] Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.653144 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.725982 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-config\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.726039 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-client-ca\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.726341 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxxvt\" (UniqueName: \"kubernetes.io/projected/f3862f25-5b8b-46a1-8630-404797ca616a-kube-api-access-hxxvt\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.726881 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3862f25-5b8b-46a1-8630-404797ca616a-serving-cert\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.828031 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxxvt\" (UniqueName: \"kubernetes.io/projected/f3862f25-5b8b-46a1-8630-404797ca616a-kube-api-access-hxxvt\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.828195 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3862f25-5b8b-46a1-8630-404797ca616a-serving-cert\") pod 
\"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.828286 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-config\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.828426 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-client-ca\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.829477 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-client-ca\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.829859 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-config\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.837400 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3862f25-5b8b-46a1-8630-404797ca616a-serving-cert\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.847565 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxxvt\" (UniqueName: \"kubernetes.io/projected/f3862f25-5b8b-46a1-8630-404797ca616a-kube-api-access-hxxvt\") pod \"route-controller-manager-585f44c67c-fpmdj\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:41 crc kubenswrapper[5005]: I0225 11:20:41.954739 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.176028 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.177530 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.180877 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.184974 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.185799 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.345277 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.345335 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-var-lock\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.345393 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.385110 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj"] Feb 25 11:20:42 
crc kubenswrapper[5005]: I0225 11:20:42.446738 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.446811 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-var-lock\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.446863 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.446977 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-var-lock\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.446993 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.473237 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.505017 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" event={"ID":"f3862f25-5b8b-46a1-8630-404797ca616a","Type":"ContainerStarted","Data":"30db779a08cb3d03a105fe06030e32f0a10941bc20511b090f839397f8f403c3"} Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.505419 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qf86g" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="registry-server" containerID="cri-o://23d6f2a7955cebedfbba289b5e65d81ef367fdb3fdf74514774ca8dccc761c9e" gracePeriod=2 Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.507307 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:20:42 crc kubenswrapper[5005]: I0225 11:20:42.925437 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.279050 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.512751 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4","Type":"ContainerStarted","Data":"e0b28b9462e6cd8cfdb15e12eeb5e59f87f11b43afa2878205e62a461c97cdd3"} Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.513052 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4","Type":"ContainerStarted","Data":"1de2f4b27cf45483f36d76c50c20b12a8b5d558de7cbf3193b7b893399cd6d7e"} Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.516104 5005 generic.go:334] "Generic (PLEG): container finished" podID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerID="23d6f2a7955cebedfbba289b5e65d81ef367fdb3fdf74514774ca8dccc761c9e" exitCode=0 Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.516150 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qf86g" event={"ID":"5a7766dc-4b89-4d09-87a3-7690f8af1ad7","Type":"ContainerDied","Data":"23d6f2a7955cebedfbba289b5e65d81ef367fdb3fdf74514774ca8dccc761c9e"} Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.518484 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" event={"ID":"f3862f25-5b8b-46a1-8630-404797ca616a","Type":"ContainerStarted","Data":"eb10e35000a042a640c42fdfef4c8c516b4ef096dfcb0ab21e90190508329948"} Feb 25 
11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.518510 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.533938 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.552769 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" podStartSLOduration=5.552756168 podStartE2EDuration="5.552756168s" podCreationTimestamp="2026-02-25 11:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:43.550654044 +0000 UTC m=+157.591386381" watchObservedRunningTime="2026-02-25 11:20:43.552756168 +0000 UTC m=+157.593488495" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.553719 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.553712322 podStartE2EDuration="1.553712322s" podCreationTimestamp="2026-02-25 11:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:20:43.531660534 +0000 UTC m=+157.572392861" watchObservedRunningTime="2026-02-25 11:20:43.553712322 +0000 UTC m=+157.594444649" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.619655 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.764008 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-catalog-content\") pod \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.764131 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-utilities\") pod \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.764164 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2dtt\" (UniqueName: \"kubernetes.io/projected/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-kube-api-access-v2dtt\") pod \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\" (UID: \"5a7766dc-4b89-4d09-87a3-7690f8af1ad7\") " Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.764893 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-utilities" (OuterVolumeSpecName: "utilities") pod "5a7766dc-4b89-4d09-87a3-7690f8af1ad7" (UID: "5a7766dc-4b89-4d09-87a3-7690f8af1ad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.770935 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-kube-api-access-v2dtt" (OuterVolumeSpecName: "kube-api-access-v2dtt") pod "5a7766dc-4b89-4d09-87a3-7690f8af1ad7" (UID: "5a7766dc-4b89-4d09-87a3-7690f8af1ad7"). InnerVolumeSpecName "kube-api-access-v2dtt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.865214 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:43 crc kubenswrapper[5005]: I0225 11:20:43.865244 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2dtt\" (UniqueName: \"kubernetes.io/projected/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-kube-api-access-v2dtt\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.009667 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a7766dc-4b89-4d09-87a3-7690f8af1ad7" (UID: "5a7766dc-4b89-4d09-87a3-7690f8af1ad7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.068285 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7766dc-4b89-4d09-87a3-7690f8af1ad7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.527517 5005 generic.go:334] "Generic (PLEG): container finished" podID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerID="efa4fab6845a904d2453745b60f92daf2c19142cae8703fc8c7ca40d731f6e2c" exitCode=0 Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.527986 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw6ng" event={"ID":"4e5d39a0-d977-4973-a2a3-55699e86de91","Type":"ContainerDied","Data":"efa4fab6845a904d2453745b60f92daf2c19142cae8703fc8c7ca40d731f6e2c"} Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.532871 5005 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-qf86g" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.532885 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qf86g" event={"ID":"5a7766dc-4b89-4d09-87a3-7690f8af1ad7","Type":"ContainerDied","Data":"c4a0bb2e9cd7adb5c243d7efb7bb232e237aa44074af7de974f87a5096aaaf78"} Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.532949 5005 scope.go:117] "RemoveContainer" containerID="23d6f2a7955cebedfbba289b5e65d81ef367fdb3fdf74514774ca8dccc761c9e" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.577568 5005 scope.go:117] "RemoveContainer" containerID="3837af66c67ae31840de45c8e92d4efa2354836e56f46a45e9887725581f740e" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.580186 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qf86g"] Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.585899 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qf86g"] Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.602077 5005 scope.go:117] "RemoveContainer" containerID="0f90e868f308552b9fe1a39a52fe61b1b4744f25cc4bba1dc7625c4574cb65ce" Feb 25 11:20:44 crc kubenswrapper[5005]: I0225 11:20:44.701553 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" path="/var/lib/kubelet/pods/5a7766dc-4b89-4d09-87a3-7690f8af1ad7/volumes" Feb 25 11:20:45 crc kubenswrapper[5005]: I0225 11:20:45.546907 5005 generic.go:334] "Generic (PLEG): container finished" podID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerID="b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df" exitCode=0 Feb 25 11:20:45 crc kubenswrapper[5005]: I0225 11:20:45.546990 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhhkb" 
event={"ID":"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1","Type":"ContainerDied","Data":"b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df"} Feb 25 11:20:45 crc kubenswrapper[5005]: I0225 11:20:45.551118 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw6ng" event={"ID":"4e5d39a0-d977-4973-a2a3-55699e86de91","Type":"ContainerStarted","Data":"a4eccfabd239252ec9261e9af13bee8677b2c7e3f6199893c41e8932206d9c2a"} Feb 25 11:20:45 crc kubenswrapper[5005]: I0225 11:20:45.556987 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerStarted","Data":"8c353db74ccda82b617cdb4349427d99643bcbed3950421bf641dfb706248d63"} Feb 25 11:20:45 crc kubenswrapper[5005]: I0225 11:20:45.595863 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vw6ng" podStartSLOduration=2.614035695 podStartE2EDuration="44.595839933s" podCreationTimestamp="2026-02-25 11:20:01 +0000 UTC" firstStartedPulling="2026-02-25 11:20:03.021115159 +0000 UTC m=+117.061847486" lastFinishedPulling="2026-02-25 11:20:45.002919367 +0000 UTC m=+159.043651724" observedRunningTime="2026-02-25 11:20:45.590921288 +0000 UTC m=+159.631653625" watchObservedRunningTime="2026-02-25 11:20:45.595839933 +0000 UTC m=+159.636572260" Feb 25 11:20:46 crc kubenswrapper[5005]: I0225 11:20:46.562814 5005 generic.go:334] "Generic (PLEG): container finished" podID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerID="8c353db74ccda82b617cdb4349427d99643bcbed3950421bf641dfb706248d63" exitCode=0 Feb 25 11:20:46 crc kubenswrapper[5005]: I0225 11:20:46.562871 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" 
event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerDied","Data":"8c353db74ccda82b617cdb4349427d99643bcbed3950421bf641dfb706248d63"} Feb 25 11:20:46 crc kubenswrapper[5005]: I0225 11:20:46.566980 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhhkb" event={"ID":"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1","Type":"ContainerStarted","Data":"4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be"} Feb 25 11:20:46 crc kubenswrapper[5005]: I0225 11:20:46.569590 5005 generic.go:334] "Generic (PLEG): container finished" podID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerID="8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683" exitCode=0 Feb 25 11:20:46 crc kubenswrapper[5005]: I0225 11:20:46.569648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfww" event={"ID":"a80639a8-456e-4f88-a949-ce0c6fa8284c","Type":"ContainerDied","Data":"8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683"} Feb 25 11:20:46 crc kubenswrapper[5005]: I0225 11:20:46.614845 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhhkb" podStartSLOduration=2.745432928 podStartE2EDuration="44.614826835s" podCreationTimestamp="2026-02-25 11:20:02 +0000 UTC" firstStartedPulling="2026-02-25 11:20:04.099629465 +0000 UTC m=+118.140361792" lastFinishedPulling="2026-02-25 11:20:45.969023362 +0000 UTC m=+160.009755699" observedRunningTime="2026-02-25 11:20:46.613040891 +0000 UTC m=+160.653773218" watchObservedRunningTime="2026-02-25 11:20:46.614826835 +0000 UTC m=+160.655559162" Feb 25 11:20:47 crc kubenswrapper[5005]: I0225 11:20:47.580710 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerStarted","Data":"d23d7c3f091e304f99032ea79230ff5b393277aa44da3fc6a2971a4980508fc4"} 
Feb 25 11:20:47 crc kubenswrapper[5005]: I0225 11:20:47.599307 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9bdm4" podStartSLOduration=2.152929545 podStartE2EDuration="48.599287988s" podCreationTimestamp="2026-02-25 11:19:59 +0000 UTC" firstStartedPulling="2026-02-25 11:20:00.589816611 +0000 UTC m=+114.630548928" lastFinishedPulling="2026-02-25 11:20:47.036175044 +0000 UTC m=+161.076907371" observedRunningTime="2026-02-25 11:20:47.598360165 +0000 UTC m=+161.639092532" watchObservedRunningTime="2026-02-25 11:20:47.599287988 +0000 UTC m=+161.640020315" Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.588800 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfww" event={"ID":"a80639a8-456e-4f88-a949-ce0c6fa8284c","Type":"ContainerStarted","Data":"bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e"} Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.592947 5005 generic.go:334] "Generic (PLEG): container finished" podID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerID="d5252fa3ede36290c040c2a90438259b3dbf341695dcb3f9a342949e313ecafc" exitCode=0 Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.593013 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlrk4" event={"ID":"d4ff9940-8045-4d50-99a8-85f30d202aae","Type":"ContainerDied","Data":"d5252fa3ede36290c040c2a90438259b3dbf341695dcb3f9a342949e313ecafc"} Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.594452 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533640-qhftv" event={"ID":"705e7826-0109-4ea7-bfe8-3cf9e37285bc","Type":"ContainerStarted","Data":"2d98975ee81a28eedb319b71763ee373c43fc5c9d348e0af1034229ebbb4fcd8"} Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.596749 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="5552a8c8-de53-484f-a47f-42dbd9983137" containerID="ec5510a9ad2c33b810ba9445d55c5e2e2f960f1cfa0a67ecfe7fd627051243aa" exitCode=0 Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.596781 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5mfq" event={"ID":"5552a8c8-de53-484f-a47f-42dbd9983137","Type":"ContainerDied","Data":"ec5510a9ad2c33b810ba9445d55c5e2e2f960f1cfa0a67ecfe7fd627051243aa"} Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.606697 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgfww" podStartSLOduration=3.609602509 podStartE2EDuration="50.60668436s" podCreationTimestamp="2026-02-25 11:19:58 +0000 UTC" firstStartedPulling="2026-02-25 11:20:00.709820455 +0000 UTC m=+114.750552782" lastFinishedPulling="2026-02-25 11:20:47.706902306 +0000 UTC m=+161.747634633" observedRunningTime="2026-02-25 11:20:48.606592667 +0000 UTC m=+162.647324994" watchObservedRunningTime="2026-02-25 11:20:48.60668436 +0000 UTC m=+162.647416687" Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.626403 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533640-qhftv" podStartSLOduration=1.400671183 podStartE2EDuration="48.626386396s" podCreationTimestamp="2026-02-25 11:20:00 +0000 UTC" firstStartedPulling="2026-02-25 11:20:00.949724908 +0000 UTC m=+114.990457235" lastFinishedPulling="2026-02-25 11:20:48.175440121 +0000 UTC m=+162.216172448" observedRunningTime="2026-02-25 11:20:48.622750117 +0000 UTC m=+162.663482454" watchObservedRunningTime="2026-02-25 11:20:48.626386396 +0000 UTC m=+162.667118723" Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.979781 5005 csr.go:261] certificate signing request csr-wps6b is approved, waiting to be issued Feb 25 11:20:48 crc kubenswrapper[5005]: I0225 11:20:48.989868 5005 csr.go:257] certificate signing request csr-wps6b is issued 
Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.421795 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.421863 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.507006 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.507069 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.547105 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.604212 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5mfq" event={"ID":"5552a8c8-de53-484f-a47f-42dbd9983137","Type":"ContainerStarted","Data":"265a68dc1e846f753949ea1c2eec4994b2273b7a29a9bdb0c85bb3395edc2a27"} Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.606358 5005 generic.go:334] "Generic (PLEG): container finished" podID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerID="d43c7b0216c4096d364817ce1e502b15c3dc1e3977be4e6d09db48540eafe4a8" exitCode=0 Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.606414 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twk58" event={"ID":"7d259751-70f2-4fcc-a6c7-4b99993eb217","Type":"ContainerDied","Data":"d43c7b0216c4096d364817ce1e502b15c3dc1e3977be4e6d09db48540eafe4a8"} Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.609402 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jlrk4" event={"ID":"d4ff9940-8045-4d50-99a8-85f30d202aae","Type":"ContainerStarted","Data":"04e9e5efffe4d02306c5548c74c766d7dbb28fed899f21bcb5413103007a125e"} Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.610486 5005 generic.go:334] "Generic (PLEG): container finished" podID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" containerID="2d98975ee81a28eedb319b71763ee373c43fc5c9d348e0af1034229ebbb4fcd8" exitCode=0 Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.610578 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533640-qhftv" event={"ID":"705e7826-0109-4ea7-bfe8-3cf9e37285bc","Type":"ContainerDied","Data":"2d98975ee81a28eedb319b71763ee373c43fc5c9d348e0af1034229ebbb4fcd8"} Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.647356 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5mfq" podStartSLOduration=2.622919196 podStartE2EDuration="48.647332717s" podCreationTimestamp="2026-02-25 11:20:01 +0000 UTC" firstStartedPulling="2026-02-25 11:20:03.02088748 +0000 UTC m=+117.061619807" lastFinishedPulling="2026-02-25 11:20:49.045301001 +0000 UTC m=+163.086033328" observedRunningTime="2026-02-25 11:20:49.632031477 +0000 UTC m=+163.672763804" watchObservedRunningTime="2026-02-25 11:20:49.647332717 +0000 UTC m=+163.688065044" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.666294 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jlrk4" podStartSLOduration=3.383330836 podStartE2EDuration="50.666263275s" podCreationTimestamp="2026-02-25 11:19:59 +0000 UTC" firstStartedPulling="2026-02-25 11:20:01.785715745 +0000 UTC m=+115.826448072" lastFinishedPulling="2026-02-25 11:20:49.068648184 +0000 UTC m=+163.109380511" observedRunningTime="2026-02-25 11:20:49.661149295 +0000 UTC m=+163.701881632" watchObservedRunningTime="2026-02-25 
11:20:49.666263275 +0000 UTC m=+163.706995602" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.982238 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.982601 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.991405 5005 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-25 14:04:56.333241807 +0000 UTC Feb 25 11:20:49 crc kubenswrapper[5005]: I0225 11:20:49.991446 5005 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6554h44m6.341797925s for next certificate rotation Feb 25 11:20:50 crc kubenswrapper[5005]: I0225 11:20:50.459628 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dgfww" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="registry-server" probeResult="failure" output=< Feb 25 11:20:50 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:20:50 crc kubenswrapper[5005]: > Feb 25 11:20:50 crc kubenswrapper[5005]: I0225 11:20:50.950333 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:50 crc kubenswrapper[5005]: I0225 11:20:50.992279 5005 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-30 00:05:27.972122186 +0000 UTC Feb 25 11:20:50 crc kubenswrapper[5005]: I0225 11:20:50.992337 5005 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7380h44m36.979789642s for next certificate rotation Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.030023 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jlrk4" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="registry-server" probeResult="failure" output=< Feb 25 11:20:51 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:20:51 crc kubenswrapper[5005]: > Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.060050 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qm9d\" (UniqueName: \"kubernetes.io/projected/705e7826-0109-4ea7-bfe8-3cf9e37285bc-kube-api-access-6qm9d\") pod \"705e7826-0109-4ea7-bfe8-3cf9e37285bc\" (UID: \"705e7826-0109-4ea7-bfe8-3cf9e37285bc\") " Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.258580 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705e7826-0109-4ea7-bfe8-3cf9e37285bc-kube-api-access-6qm9d" (OuterVolumeSpecName: "kube-api-access-6qm9d") pod "705e7826-0109-4ea7-bfe8-3cf9e37285bc" (UID: "705e7826-0109-4ea7-bfe8-3cf9e37285bc"). InnerVolumeSpecName "kube-api-access-6qm9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.261631 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qm9d\" (UniqueName: \"kubernetes.io/projected/705e7826-0109-4ea7-bfe8-3cf9e37285bc-kube-api-access-6qm9d\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.485574 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.485965 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.567112 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.623760 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twk58" event={"ID":"7d259751-70f2-4fcc-a6c7-4b99993eb217","Type":"ContainerStarted","Data":"b813b7065814d7116933cf98a41e5a6b718fe7cf944ed6ad71757c52712547f6"} Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.625348 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533640-qhftv" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.625336 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533640-qhftv" event={"ID":"705e7826-0109-4ea7-bfe8-3cf9e37285bc","Type":"ContainerDied","Data":"6c93eec9f67bcc8155d49ea3cabcdf9fac05cfbd181ab5188e67802193042e1e"} Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.625405 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c93eec9f67bcc8155d49ea3cabcdf9fac05cfbd181ab5188e67802193042e1e" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.641287 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twk58" podStartSLOduration=2.370454346 podStartE2EDuration="49.641262406s" podCreationTimestamp="2026-02-25 11:20:02 +0000 UTC" firstStartedPulling="2026-02-25 11:20:04.112232694 +0000 UTC m=+118.152965021" lastFinishedPulling="2026-02-25 11:20:51.383040754 +0000 UTC m=+165.423773081" observedRunningTime="2026-02-25 11:20:51.638186148 +0000 UTC m=+165.678918485" watchObservedRunningTime="2026-02-25 11:20:51.641262406 +0000 UTC m=+165.681994723" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.683750 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.889976 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.890041 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:51 crc kubenswrapper[5005]: I0225 11:20:51.931561 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:20:52 crc kubenswrapper[5005]: I0225 11:20:52.516480 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:52 crc kubenswrapper[5005]: I0225 11:20:52.516531 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:52 crc kubenswrapper[5005]: I0225 11:20:52.568472 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:52 crc kubenswrapper[5005]: I0225 11:20:52.714920 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:20:52 crc kubenswrapper[5005]: I0225 11:20:52.926026 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:52 crc kubenswrapper[5005]: I0225 11:20:52.926476 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:20:53 crc kubenswrapper[5005]: I0225 11:20:53.978923 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-twk58" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="registry-server" probeResult="failure" output=< Feb 25 11:20:53 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:20:53 crc kubenswrapper[5005]: > Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.255570 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-587bcdf5bc-whn89"] Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.255760 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" 
podUID="55c9939e-1f30-42e8-8d16-e706a5cc356e" containerName="controller-manager" containerID="cri-o://a7922367aeb71c1e9167b70c0848a4a5678034d5ae291bfddf7616c1e1f94738" gracePeriod=30 Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.273361 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj"] Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.273558 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" podUID="f3862f25-5b8b-46a1-8630-404797ca616a" containerName="route-controller-manager" containerID="cri-o://eb10e35000a042a640c42fdfef4c8c516b4ef096dfcb0ab21e90190508329948" gracePeriod=30 Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.692047 5005 generic.go:334] "Generic (PLEG): container finished" podID="f3862f25-5b8b-46a1-8630-404797ca616a" containerID="eb10e35000a042a640c42fdfef4c8c516b4ef096dfcb0ab21e90190508329948" exitCode=0 Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.694041 5005 generic.go:334] "Generic (PLEG): container finished" podID="55c9939e-1f30-42e8-8d16-e706a5cc356e" containerID="a7922367aeb71c1e9167b70c0848a4a5678034d5ae291bfddf7616c1e1f94738" exitCode=0 Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.698772 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" event={"ID":"f3862f25-5b8b-46a1-8630-404797ca616a","Type":"ContainerDied","Data":"eb10e35000a042a640c42fdfef4c8c516b4ef096dfcb0ab21e90190508329948"} Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.698881 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" event={"ID":"55c9939e-1f30-42e8-8d16-e706a5cc356e","Type":"ContainerDied","Data":"a7922367aeb71c1e9167b70c0848a4a5678034d5ae291bfddf7616c1e1f94738"} Feb 
25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.818414 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.893806 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-client-ca\") pod \"f3862f25-5b8b-46a1-8630-404797ca616a\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.893905 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxxvt\" (UniqueName: \"kubernetes.io/projected/f3862f25-5b8b-46a1-8630-404797ca616a-kube-api-access-hxxvt\") pod \"f3862f25-5b8b-46a1-8630-404797ca616a\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.894006 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3862f25-5b8b-46a1-8630-404797ca616a-serving-cert\") pod \"f3862f25-5b8b-46a1-8630-404797ca616a\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.894065 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-config\") pod \"f3862f25-5b8b-46a1-8630-404797ca616a\" (UID: \"f3862f25-5b8b-46a1-8630-404797ca616a\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.894628 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-client-ca" (OuterVolumeSpecName: "client-ca") pod "f3862f25-5b8b-46a1-8630-404797ca616a" (UID: "f3862f25-5b8b-46a1-8630-404797ca616a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.894721 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-config" (OuterVolumeSpecName: "config") pod "f3862f25-5b8b-46a1-8630-404797ca616a" (UID: "f3862f25-5b8b-46a1-8630-404797ca616a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.899036 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3862f25-5b8b-46a1-8630-404797ca616a-kube-api-access-hxxvt" (OuterVolumeSpecName: "kube-api-access-hxxvt") pod "f3862f25-5b8b-46a1-8630-404797ca616a" (UID: "f3862f25-5b8b-46a1-8630-404797ca616a"). InnerVolumeSpecName "kube-api-access-hxxvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.899417 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3862f25-5b8b-46a1-8630-404797ca616a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f3862f25-5b8b-46a1-8630-404797ca616a" (UID: "f3862f25-5b8b-46a1-8630-404797ca616a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.909638 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.995364 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgf96\" (UniqueName: \"kubernetes.io/projected/55c9939e-1f30-42e8-8d16-e706a5cc356e-kube-api-access-tgf96\") pod \"55c9939e-1f30-42e8-8d16-e706a5cc356e\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.995686 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-config\") pod \"55c9939e-1f30-42e8-8d16-e706a5cc356e\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.995848 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-client-ca\") pod \"55c9939e-1f30-42e8-8d16-e706a5cc356e\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996008 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-proxy-ca-bundles\") pod \"55c9939e-1f30-42e8-8d16-e706a5cc356e\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996150 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c9939e-1f30-42e8-8d16-e706a5cc356e-serving-cert\") pod \"55c9939e-1f30-42e8-8d16-e706a5cc356e\" (UID: \"55c9939e-1f30-42e8-8d16-e706a5cc356e\") " Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996467 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996603 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3862f25-5b8b-46a1-8630-404797ca616a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996780 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxxvt\" (UniqueName: \"kubernetes.io/projected/f3862f25-5b8b-46a1-8630-404797ca616a-kube-api-access-hxxvt\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996948 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3862f25-5b8b-46a1-8630-404797ca616a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996675 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-client-ca" (OuterVolumeSpecName: "client-ca") pod "55c9939e-1f30-42e8-8d16-e706a5cc356e" (UID: "55c9939e-1f30-42e8-8d16-e706a5cc356e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996825 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "55c9939e-1f30-42e8-8d16-e706a5cc356e" (UID: "55c9939e-1f30-42e8-8d16-e706a5cc356e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.996903 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-config" (OuterVolumeSpecName: "config") pod "55c9939e-1f30-42e8-8d16-e706a5cc356e" (UID: "55c9939e-1f30-42e8-8d16-e706a5cc356e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:20:58 crc kubenswrapper[5005]: I0225 11:20:58.999952 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c9939e-1f30-42e8-8d16-e706a5cc356e-kube-api-access-tgf96" (OuterVolumeSpecName: "kube-api-access-tgf96") pod "55c9939e-1f30-42e8-8d16-e706a5cc356e" (UID: "55c9939e-1f30-42e8-8d16-e706a5cc356e"). InnerVolumeSpecName "kube-api-access-tgf96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.001685 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c9939e-1f30-42e8-8d16-e706a5cc356e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55c9939e-1f30-42e8-8d16-e706a5cc356e" (UID: "55c9939e-1f30-42e8-8d16-e706a5cc356e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.099077 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.099136 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.099159 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c9939e-1f30-42e8-8d16-e706a5cc356e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.099180 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgf96\" (UniqueName: \"kubernetes.io/projected/55c9939e-1f30-42e8-8d16-e706a5cc356e-kube-api-access-tgf96\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.099200 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c9939e-1f30-42e8-8d16-e706a5cc356e-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.504355 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.562238 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.576397 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 
11:20:59.668458 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7559c79686-zn9fc"] Feb 25 11:20:59 crc kubenswrapper[5005]: E0225 11:20:59.668872 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="extract-content" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.668903 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="extract-content" Feb 25 11:20:59 crc kubenswrapper[5005]: E0225 11:20:59.668924 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="extract-utilities" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.668938 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="extract-utilities" Feb 25 11:20:59 crc kubenswrapper[5005]: E0225 11:20:59.668969 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c9939e-1f30-42e8-8d16-e706a5cc356e" containerName="controller-manager" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.668981 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c9939e-1f30-42e8-8d16-e706a5cc356e" containerName="controller-manager" Feb 25 11:20:59 crc kubenswrapper[5005]: E0225 11:20:59.669001 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="registry-server" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669015 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="registry-server" Feb 25 11:20:59 crc kubenswrapper[5005]: E0225 11:20:59.669031 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3862f25-5b8b-46a1-8630-404797ca616a" containerName="route-controller-manager" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 
11:20:59.669045 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3862f25-5b8b-46a1-8630-404797ca616a" containerName="route-controller-manager" Feb 25 11:20:59 crc kubenswrapper[5005]: E0225 11:20:59.669061 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" containerName="oc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669073 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" containerName="oc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669243 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3862f25-5b8b-46a1-8630-404797ca616a" containerName="route-controller-manager" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669264 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7766dc-4b89-4d09-87a3-7690f8af1ad7" containerName="registry-server" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669283 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" containerName="oc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669302 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c9939e-1f30-42e8-8d16-e706a5cc356e" containerName="controller-manager" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.669935 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.674314 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.675404 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.679264 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.691101 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7559c79686-zn9fc"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.707155 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecf8f78-dacb-47a9-95f6-46c6f141e468-serving-cert\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.707782 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-config\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.707830 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-proxy-ca-bundles\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709066 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" event={"ID":"55c9939e-1f30-42e8-8d16-e706a5cc356e","Type":"ContainerDied","Data":"57ba796c84c5ac0038669c1a249ab08cdb866cd75a76063e4c6d31472b2f4e38"} Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709125 5005 scope.go:117] "RemoveContainer" containerID="a7922367aeb71c1e9167b70c0848a4a5678034d5ae291bfddf7616c1e1f94738" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709302 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-587bcdf5bc-whn89" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709739 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c538682-fd83-4c33-b28c-0031f11cf907-serving-cert\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709779 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5g5\" (UniqueName: \"kubernetes.io/projected/2c538682-fd83-4c33-b28c-0031f11cf907-kube-api-access-dk5g5\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709835 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-client-ca\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 
11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709906 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-config\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.709940 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-client-ca\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.710043 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2p9x\" (UniqueName: \"kubernetes.io/projected/cecf8f78-dacb-47a9-95f6-46c6f141e468-kube-api-access-x2p9x\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.712834 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.712905 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj" event={"ID":"f3862f25-5b8b-46a1-8630-404797ca616a","Type":"ContainerDied","Data":"30db779a08cb3d03a105fe06030e32f0a10941bc20511b090f839397f8f403c3"} Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.740150 5005 scope.go:117] "RemoveContainer" containerID="eb10e35000a042a640c42fdfef4c8c516b4ef096dfcb0ab21e90190508329948" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.760469 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-587bcdf5bc-whn89"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.765538 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-587bcdf5bc-whn89"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.768653 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.771058 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f44c67c-fpmdj"] Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811107 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-proxy-ca-bundles\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811155 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c538682-fd83-4c33-b28c-0031f11cf907-serving-cert\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811174 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5g5\" (UniqueName: \"kubernetes.io/projected/2c538682-fd83-4c33-b28c-0031f11cf907-kube-api-access-dk5g5\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811194 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-client-ca\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811226 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-config\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811241 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-client-ca\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 
11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811268 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2p9x\" (UniqueName: \"kubernetes.io/projected/cecf8f78-dacb-47a9-95f6-46c6f141e468-kube-api-access-x2p9x\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811292 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecf8f78-dacb-47a9-95f6-46c6f141e468-serving-cert\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.811316 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-config\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.812521 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-config\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.812559 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-client-ca\") pod \"controller-manager-7559c79686-zn9fc\" (UID: 
\"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.812963 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-client-ca\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.813903 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-config\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.813993 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-proxy-ca-bundles\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.818610 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c538682-fd83-4c33-b28c-0031f11cf907-serving-cert\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.822047 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cecf8f78-dacb-47a9-95f6-46c6f141e468-serving-cert\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.837144 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5g5\" (UniqueName: \"kubernetes.io/projected/2c538682-fd83-4c33-b28c-0031f11cf907-kube-api-access-dk5g5\") pod \"route-controller-manager-654d5c677d-rt9q2\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.837599 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2p9x\" (UniqueName: \"kubernetes.io/projected/cecf8f78-dacb-47a9-95f6-46c6f141e468-kube-api-access-x2p9x\") pod \"controller-manager-7559c79686-zn9fc\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:20:59 crc kubenswrapper[5005]: I0225 11:20:59.991357 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.010780 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.041278 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.130424 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.293654 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7559c79686-zn9fc"] Feb 25 11:21:00 crc kubenswrapper[5005]: W0225 11:21:00.298432 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecf8f78_dacb_47a9_95f6_46c6f141e468.slice/crio-ab7e42e18b61d5c8cd515438a6c060f830a85504bb7ab69756366fb79a1725dc WatchSource:0}: Error finding container ab7e42e18b61d5c8cd515438a6c060f830a85504bb7ab69756366fb79a1725dc: Status 404 returned error can't find the container with id ab7e42e18b61d5c8cd515438a6c060f830a85504bb7ab69756366fb79a1725dc Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.457619 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2"] Feb 25 11:21:00 crc kubenswrapper[5005]: W0225 11:21:00.476823 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c538682_fd83_4c33_b28c_0031f11cf907.slice/crio-1e269de9662e6a42d153992e1b056c5487132bd229561ef6e9a9f5a86c8225be WatchSource:0}: Error finding container 1e269de9662e6a42d153992e1b056c5487132bd229561ef6e9a9f5a86c8225be: Status 404 returned error can't find the container with id 1e269de9662e6a42d153992e1b056c5487132bd229561ef6e9a9f5a86c8225be Feb 25 11:21:00 crc 
kubenswrapper[5005]: I0225 11:21:00.694126 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c9939e-1f30-42e8-8d16-e706a5cc356e" path="/var/lib/kubelet/pods/55c9939e-1f30-42e8-8d16-e706a5cc356e/volumes" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.695502 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3862f25-5b8b-46a1-8630-404797ca616a" path="/var/lib/kubelet/pods/f3862f25-5b8b-46a1-8630-404797ca616a/volumes" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.720015 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" event={"ID":"2c538682-fd83-4c33-b28c-0031f11cf907","Type":"ContainerStarted","Data":"049753b6ff0dfb9295486a0e755ce1792a50a9f099ea773f053cc9a630cf84f6"} Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.720221 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.720250 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" event={"ID":"2c538682-fd83-4c33-b28c-0031f11cf907","Type":"ContainerStarted","Data":"1e269de9662e6a42d153992e1b056c5487132bd229561ef6e9a9f5a86c8225be"} Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.724095 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" event={"ID":"cecf8f78-dacb-47a9-95f6-46c6f141e468","Type":"ContainerStarted","Data":"c4eecba03393fa07c1f9ad0204f05cd69db676a1e43dfa19b28d96c7461a9607"} Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.724141 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" 
event={"ID":"cecf8f78-dacb-47a9-95f6-46c6f141e468","Type":"ContainerStarted","Data":"ab7e42e18b61d5c8cd515438a6c060f830a85504bb7ab69756366fb79a1725dc"} Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.738917 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" podStartSLOduration=2.738896889 podStartE2EDuration="2.738896889s" podCreationTimestamp="2026-02-25 11:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:00.735557061 +0000 UTC m=+174.776289388" watchObservedRunningTime="2026-02-25 11:21:00.738896889 +0000 UTC m=+174.779629216" Feb 25 11:21:00 crc kubenswrapper[5005]: I0225 11:21:00.754880 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" podStartSLOduration=2.754866083 podStartE2EDuration="2.754866083s" podCreationTimestamp="2026-02-25 11:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:00.752974556 +0000 UTC m=+174.793706873" watchObservedRunningTime="2026-02-25 11:21:00.754866083 +0000 UTC m=+174.795598410" Feb 25 11:21:01 crc kubenswrapper[5005]: I0225 11:21:01.149033 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:21:01 crc kubenswrapper[5005]: I0225 11:21:01.386165 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlrk4"] Feb 25 11:21:01 crc kubenswrapper[5005]: I0225 11:21:01.731107 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:21:01 crc kubenswrapper[5005]: I0225 
11:21:01.731592 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jlrk4" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="registry-server" containerID="cri-o://04e9e5efffe4d02306c5548c74c766d7dbb28fed899f21bcb5413103007a125e" gracePeriod=2 Feb 25 11:21:01 crc kubenswrapper[5005]: I0225 11:21:01.738327 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:21:01 crc kubenswrapper[5005]: I0225 11:21:01.943899 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:21:02 crc kubenswrapper[5005]: I0225 11:21:02.738357 5005 generic.go:334] "Generic (PLEG): container finished" podID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerID="04e9e5efffe4d02306c5548c74c766d7dbb28fed899f21bcb5413103007a125e" exitCode=0 Feb 25 11:21:02 crc kubenswrapper[5005]: I0225 11:21:02.738477 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlrk4" event={"ID":"d4ff9940-8045-4d50-99a8-85f30d202aae","Type":"ContainerDied","Data":"04e9e5efffe4d02306c5548c74c766d7dbb28fed899f21bcb5413103007a125e"} Feb 25 11:21:02 crc kubenswrapper[5005]: I0225 11:21:02.738543 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jlrk4" event={"ID":"d4ff9940-8045-4d50-99a8-85f30d202aae","Type":"ContainerDied","Data":"370ff990f6319cecced9f1a6cca8777cdb96869da770adbf118a1b82501ccbc4"} Feb 25 11:21:02 crc kubenswrapper[5005]: I0225 11:21:02.738570 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370ff990f6319cecced9f1a6cca8777cdb96869da770adbf118a1b82501ccbc4" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.001016 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.006861 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.064635 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r8dk\" (UniqueName: \"kubernetes.io/projected/d4ff9940-8045-4d50-99a8-85f30d202aae-kube-api-access-6r8dk\") pod \"d4ff9940-8045-4d50-99a8-85f30d202aae\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.064776 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-catalog-content\") pod \"d4ff9940-8045-4d50-99a8-85f30d202aae\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.064841 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-utilities\") pod \"d4ff9940-8045-4d50-99a8-85f30d202aae\" (UID: \"d4ff9940-8045-4d50-99a8-85f30d202aae\") " Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.066503 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-utilities" (OuterVolumeSpecName: "utilities") pod "d4ff9940-8045-4d50-99a8-85f30d202aae" (UID: "d4ff9940-8045-4d50-99a8-85f30d202aae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.077533 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ff9940-8045-4d50-99a8-85f30d202aae-kube-api-access-6r8dk" (OuterVolumeSpecName: "kube-api-access-6r8dk") pod "d4ff9940-8045-4d50-99a8-85f30d202aae" (UID: "d4ff9940-8045-4d50-99a8-85f30d202aae"). InnerVolumeSpecName "kube-api-access-6r8dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.084646 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.139939 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4ff9940-8045-4d50-99a8-85f30d202aae" (UID: "d4ff9940-8045-4d50-99a8-85f30d202aae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.166408 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r8dk\" (UniqueName: \"kubernetes.io/projected/d4ff9940-8045-4d50-99a8-85f30d202aae-kube-api-access-6r8dk\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.166449 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.166461 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ff9940-8045-4d50-99a8-85f30d202aae-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.743855 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jlrk4" Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.791842 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jlrk4"] Feb 25 11:21:03 crc kubenswrapper[5005]: I0225 11:21:03.797723 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jlrk4"] Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.192763 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5mfq"] Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.193151 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z5mfq" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="registry-server" containerID="cri-o://265a68dc1e846f753949ea1c2eec4994b2273b7a29a9bdb0c85bb3395edc2a27" gracePeriod=2 Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 
11:21:04.578922 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" podUID="9897feaf-0f0f-44a2-bb22-8863579d6359" containerName="oauth-openshift" containerID="cri-o://c7aed6ad648831587addaa3269752bb2fd9e7d854bdaa3139a2929282baaee77" gracePeriod=15 Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.695080 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" path="/var/lib/kubelet/pods/d4ff9940-8045-4d50-99a8-85f30d202aae/volumes" Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.753240 5005 generic.go:334] "Generic (PLEG): container finished" podID="5552a8c8-de53-484f-a47f-42dbd9983137" containerID="265a68dc1e846f753949ea1c2eec4994b2273b7a29a9bdb0c85bb3395edc2a27" exitCode=0 Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.753366 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5mfq" event={"ID":"5552a8c8-de53-484f-a47f-42dbd9983137","Type":"ContainerDied","Data":"265a68dc1e846f753949ea1c2eec4994b2273b7a29a9bdb0c85bb3395edc2a27"} Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.755874 5005 generic.go:334] "Generic (PLEG): container finished" podID="9897feaf-0f0f-44a2-bb22-8863579d6359" containerID="c7aed6ad648831587addaa3269752bb2fd9e7d854bdaa3139a2929282baaee77" exitCode=0 Feb 25 11:21:04 crc kubenswrapper[5005]: I0225 11:21:04.755937 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" event={"ID":"9897feaf-0f0f-44a2-bb22-8863579d6359","Type":"ContainerDied","Data":"c7aed6ad648831587addaa3269752bb2fd9e7d854bdaa3139a2929282baaee77"} Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.153519 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.195860 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-login\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.195921 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-ocp-branding-template\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.195966 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-error\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196020 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-dir\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196068 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-cliconfig\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 
11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196135 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-policies\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196164 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-router-certs\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196192 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-service-ca\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196221 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-idp-0-file-data\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196264 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-serving-cert\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196304 5005 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-session\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196338 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-trusted-ca-bundle\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196398 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-provider-selection\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.196438 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjt27\" (UniqueName: \"kubernetes.io/projected/9897feaf-0f0f-44a2-bb22-8863579d6359-kube-api-access-hjt27\") pod \"9897feaf-0f0f-44a2-bb22-8863579d6359\" (UID: \"9897feaf-0f0f-44a2-bb22-8863579d6359\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.202212 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.202187 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.204743 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.204805 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.205800 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.206278 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.207638 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9897feaf-0f0f-44a2-bb22-8863579d6359-kube-api-access-hjt27" (OuterVolumeSpecName: "kube-api-access-hjt27") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "kube-api-access-hjt27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.226502 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.230096 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.230781 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.234672 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.236496 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.239848 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.240076 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9897feaf-0f0f-44a2-bb22-8863579d6359" (UID: "9897feaf-0f0f-44a2-bb22-8863579d6359"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.290340 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297147 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-utilities\") pod \"5552a8c8-de53-484f-a47f-42dbd9983137\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297182 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctf8w\" (UniqueName: \"kubernetes.io/projected/5552a8c8-de53-484f-a47f-42dbd9983137-kube-api-access-ctf8w\") pod \"5552a8c8-de53-484f-a47f-42dbd9983137\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297254 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-catalog-content\") pod \"5552a8c8-de53-484f-a47f-42dbd9983137\" (UID: \"5552a8c8-de53-484f-a47f-42dbd9983137\") " Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297512 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297526 5005 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297535 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297545 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297553 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297563 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297571 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297580 5005 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297589 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297598 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjt27\" (UniqueName: \"kubernetes.io/projected/9897feaf-0f0f-44a2-bb22-8863579d6359-kube-api-access-hjt27\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297607 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297615 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297623 5005 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9897feaf-0f0f-44a2-bb22-8863579d6359-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.297632 5005 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9897feaf-0f0f-44a2-bb22-8863579d6359-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc 
kubenswrapper[5005]: I0225 11:21:05.302110 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-utilities" (OuterVolumeSpecName: "utilities") pod "5552a8c8-de53-484f-a47f-42dbd9983137" (UID: "5552a8c8-de53-484f-a47f-42dbd9983137"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.304582 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5552a8c8-de53-484f-a47f-42dbd9983137-kube-api-access-ctf8w" (OuterVolumeSpecName: "kube-api-access-ctf8w") pod "5552a8c8-de53-484f-a47f-42dbd9983137" (UID: "5552a8c8-de53-484f-a47f-42dbd9983137"). InnerVolumeSpecName "kube-api-access-ctf8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.321157 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5552a8c8-de53-484f-a47f-42dbd9983137" (UID: "5552a8c8-de53-484f-a47f-42dbd9983137"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.398933 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.398979 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctf8w\" (UniqueName: \"kubernetes.io/projected/5552a8c8-de53-484f-a47f-42dbd9983137-kube-api-access-ctf8w\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.398992 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5552a8c8-de53-484f-a47f-42dbd9983137-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.586869 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twk58"] Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.587070 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twk58" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="registry-server" containerID="cri-o://b813b7065814d7116933cf98a41e5a6b718fe7cf944ed6ad71757c52712547f6" gracePeriod=2 Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.764805 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5mfq" event={"ID":"5552a8c8-de53-484f-a47f-42dbd9983137","Type":"ContainerDied","Data":"f1631d1827497423d3751e82eda53e233f2d61031d84cbf2ad881c88c6f3d250"} Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.764909 5005 scope.go:117] "RemoveContainer" containerID="265a68dc1e846f753949ea1c2eec4994b2273b7a29a9bdb0c85bb3395edc2a27" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.764835 5005 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5mfq" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.770894 5005 generic.go:334] "Generic (PLEG): container finished" podID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerID="b813b7065814d7116933cf98a41e5a6b718fe7cf944ed6ad71757c52712547f6" exitCode=0 Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.770988 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twk58" event={"ID":"7d259751-70f2-4fcc-a6c7-4b99993eb217","Type":"ContainerDied","Data":"b813b7065814d7116933cf98a41e5a6b718fe7cf944ed6ad71757c52712547f6"} Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.772527 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" event={"ID":"9897feaf-0f0f-44a2-bb22-8863579d6359","Type":"ContainerDied","Data":"6373d6f8c78636a99def47adcda1e4a7af15f855b09b2b9fc4a00b06eafb1d3a"} Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.772639 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rx5lf" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.792927 5005 scope.go:117] "RemoveContainer" containerID="ec5510a9ad2c33b810ba9445d55c5e2e2f960f1cfa0a67ecfe7fd627051243aa" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.808041 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5mfq"] Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.814006 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5mfq"] Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.844548 5005 scope.go:117] "RemoveContainer" containerID="87acc748555d5bf915b5cdca04b4aef7a9cca547eab1b788807dd41b7632de11" Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.848581 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rx5lf"] Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.853663 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rx5lf"] Feb 25 11:21:05 crc kubenswrapper[5005]: I0225 11:21:05.887605 5005 scope.go:117] "RemoveContainer" containerID="c7aed6ad648831587addaa3269752bb2fd9e7d854bdaa3139a2929282baaee77" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.005430 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.108992 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-928hn\" (UniqueName: \"kubernetes.io/projected/7d259751-70f2-4fcc-a6c7-4b99993eb217-kube-api-access-928hn\") pod \"7d259751-70f2-4fcc-a6c7-4b99993eb217\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.109076 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-catalog-content\") pod \"7d259751-70f2-4fcc-a6c7-4b99993eb217\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.109179 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-utilities\") pod \"7d259751-70f2-4fcc-a6c7-4b99993eb217\" (UID: \"7d259751-70f2-4fcc-a6c7-4b99993eb217\") " Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.110342 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-utilities" (OuterVolumeSpecName: "utilities") pod "7d259751-70f2-4fcc-a6c7-4b99993eb217" (UID: "7d259751-70f2-4fcc-a6c7-4b99993eb217"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.112492 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d259751-70f2-4fcc-a6c7-4b99993eb217-kube-api-access-928hn" (OuterVolumeSpecName: "kube-api-access-928hn") pod "7d259751-70f2-4fcc-a6c7-4b99993eb217" (UID: "7d259751-70f2-4fcc-a6c7-4b99993eb217"). InnerVolumeSpecName "kube-api-access-928hn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.211504 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.211555 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-928hn\" (UniqueName: \"kubernetes.io/projected/7d259751-70f2-4fcc-a6c7-4b99993eb217-kube-api-access-928hn\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.241604 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d259751-70f2-4fcc-a6c7-4b99993eb217" (UID: "7d259751-70f2-4fcc-a6c7-4b99993eb217"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.312632 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d259751-70f2-4fcc-a6c7-4b99993eb217-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.694456 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" path="/var/lib/kubelet/pods/5552a8c8-de53-484f-a47f-42dbd9983137/volumes" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.695767 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9897feaf-0f0f-44a2-bb22-8863579d6359" path="/var/lib/kubelet/pods/9897feaf-0f0f-44a2-bb22-8863579d6359/volumes" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.783989 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twk58" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.783951 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twk58" event={"ID":"7d259751-70f2-4fcc-a6c7-4b99993eb217","Type":"ContainerDied","Data":"fd4fc558b9f780612dbf63fae249c8b80f8f063ea7dad5fa4e5d53d85965572e"} Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.784584 5005 scope.go:117] "RemoveContainer" containerID="b813b7065814d7116933cf98a41e5a6b718fe7cf944ed6ad71757c52712547f6" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.804001 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twk58"] Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.808408 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twk58"] Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.822712 5005 scope.go:117] "RemoveContainer" containerID="d43c7b0216c4096d364817ce1e502b15c3dc1e3977be4e6d09db48540eafe4a8" Feb 25 11:21:06 crc kubenswrapper[5005]: I0225 11:21:06.855929 5005 scope.go:117] "RemoveContainer" containerID="a050921e8d57448330f27ac9cae59d3a55c022b059a7301a2739693da5e5c012" Feb 25 11:21:08 crc kubenswrapper[5005]: I0225 11:21:08.692780 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" path="/var/lib/kubelet/pods/7d259751-70f2-4fcc-a6c7-4b99993eb217/volumes" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668095 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-n48rh"] Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668832 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="extract-utilities" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668844 5005 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="extract-utilities" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668854 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="extract-content" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668861 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="extract-content" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668876 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668883 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668890 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668896 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668906 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="extract-utilities" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668912 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="extract-utilities" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668920 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="extract-content" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668925 5005 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="extract-content" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668933 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="extract-content" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668939 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="extract-content" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668948 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9897feaf-0f0f-44a2-bb22-8863579d6359" containerName="oauth-openshift" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668953 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9897feaf-0f0f-44a2-bb22-8863579d6359" containerName="oauth-openshift" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668959 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668965 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: E0225 11:21:14.668974 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="extract-utilities" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.668979 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="extract-utilities" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.669068 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d259751-70f2-4fcc-a6c7-4b99993eb217" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.669077 5005 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5552a8c8-de53-484f-a47f-42dbd9983137" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.669083 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ff9940-8045-4d50-99a8-85f30d202aae" containerName="registry-server" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.669099 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9897feaf-0f0f-44a2-bb22-8863579d6359" containerName="oauth-openshift" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.669505 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.671719 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.672634 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.672807 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.672839 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.673070 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.673165 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.673508 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.673557 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.674238 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.674262 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.674326 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.676697 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.690958 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.692199 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-n48rh"] Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.694121 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.714500 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762272 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762380 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-audit-policies\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762411 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762434 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762464 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: 
\"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762481 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762618 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762682 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762706 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c08076ad-8c08-475f-8c91-4fa74fb382ef-audit-dir\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762747 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762768 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762786 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25z4x\" (UniqueName: \"kubernetes.io/projected/c08076ad-8c08-475f-8c91-4fa74fb382ef-kube-api-access-25z4x\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762808 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.762844 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864076 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864124 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-audit-policies\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864143 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864167 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " 
pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864212 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864254 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864279 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864297 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c08076ad-8c08-475f-8c91-4fa74fb382ef-audit-dir\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864315 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864335 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864351 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25z4x\" (UniqueName: \"kubernetes.io/projected/c08076ad-8c08-475f-8c91-4fa74fb382ef-kube-api-access-25z4x\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864381 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " 
pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.864396 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.865126 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-audit-policies\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.865222 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c08076ad-8c08-475f-8c91-4fa74fb382ef-audit-dir\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.865844 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.865906 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.866114 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.871709 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.871709 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.871934 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 
11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.872720 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.873130 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.873917 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.878034 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.884465 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c08076ad-8c08-475f-8c91-4fa74fb382ef-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.888843 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25z4x\" (UniqueName: \"kubernetes.io/projected/c08076ad-8c08-475f-8c91-4fa74fb382ef-kube-api-access-25z4x\") pod \"oauth-openshift-7bccf64dbb-n48rh\" (UID: \"c08076ad-8c08-475f-8c91-4fa74fb382ef\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:14 crc kubenswrapper[5005]: I0225 11:21:14.984393 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.506194 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-n48rh"] Feb 25 11:21:15 crc kubenswrapper[5005]: W0225 11:21:15.514255 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc08076ad_8c08_475f_8c91_4fa74fb382ef.slice/crio-4f973770c7e1b35c6c77f650a4263e9054821b5e3bae017c6a693d86cd941bc3 WatchSource:0}: Error finding container 4f973770c7e1b35c6c77f650a4263e9054821b5e3bae017c6a693d86cd941bc3: Status 404 returned error can't find the container with id 4f973770c7e1b35c6c77f650a4263e9054821b5e3bae017c6a693d86cd941bc3 Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.847104 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" event={"ID":"c08076ad-8c08-475f-8c91-4fa74fb382ef","Type":"ContainerStarted","Data":"44f9dafb814a0efb34178786af45e12338cdff6b363653c298521124efca8b81"} Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.847503 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" event={"ID":"c08076ad-8c08-475f-8c91-4fa74fb382ef","Type":"ContainerStarted","Data":"4f973770c7e1b35c6c77f650a4263e9054821b5e3bae017c6a693d86cd941bc3"} Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.847842 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.848969 5005 patch_prober.go:28] interesting pod/oauth-openshift-7bccf64dbb-n48rh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.67:6443/healthz\": dial tcp 10.217.0.67:6443: connect: connection refused" start-of-body= Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.849029 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" podUID="c08076ad-8c08-475f-8c91-4fa74fb382ef" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.67:6443/healthz\": dial tcp 10.217.0.67:6443: connect: connection refused" Feb 25 11:21:15 crc kubenswrapper[5005]: I0225 11:21:15.871848 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" podStartSLOduration=36.871830819 podStartE2EDuration="36.871830819s" podCreationTimestamp="2026-02-25 11:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:15.871405823 +0000 UTC m=+189.912138170" watchObservedRunningTime="2026-02-25 11:21:15.871830819 +0000 UTC m=+189.912563146" Feb 25 11:21:16 crc kubenswrapper[5005]: I0225 11:21:16.855088 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bccf64dbb-n48rh" Feb 25 
11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.247971 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7559c79686-zn9fc"] Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.248508 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" podUID="cecf8f78-dacb-47a9-95f6-46c6f141e468" containerName="controller-manager" containerID="cri-o://c4eecba03393fa07c1f9ad0204f05cd69db676a1e43dfa19b28d96c7461a9607" gracePeriod=30 Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.344578 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2"] Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.344791 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" podUID="2c538682-fd83-4c33-b28c-0031f11cf907" containerName="route-controller-manager" containerID="cri-o://049753b6ff0dfb9295486a0e755ce1792a50a9f099ea773f053cc9a630cf84f6" gracePeriod=30 Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.861961 5005 generic.go:334] "Generic (PLEG): container finished" podID="2c538682-fd83-4c33-b28c-0031f11cf907" containerID="049753b6ff0dfb9295486a0e755ce1792a50a9f099ea773f053cc9a630cf84f6" exitCode=0 Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.862069 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" event={"ID":"2c538682-fd83-4c33-b28c-0031f11cf907","Type":"ContainerDied","Data":"049753b6ff0dfb9295486a0e755ce1792a50a9f099ea773f053cc9a630cf84f6"} Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.862430 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" 
event={"ID":"2c538682-fd83-4c33-b28c-0031f11cf907","Type":"ContainerDied","Data":"1e269de9662e6a42d153992e1b056c5487132bd229561ef6e9a9f5a86c8225be"} Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.862448 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e269de9662e6a42d153992e1b056c5487132bd229561ef6e9a9f5a86c8225be" Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.863510 5005 generic.go:334] "Generic (PLEG): container finished" podID="cecf8f78-dacb-47a9-95f6-46c6f141e468" containerID="c4eecba03393fa07c1f9ad0204f05cd69db676a1e43dfa19b28d96c7461a9607" exitCode=0 Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.863550 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" event={"ID":"cecf8f78-dacb-47a9-95f6-46c6f141e468","Type":"ContainerDied","Data":"c4eecba03393fa07c1f9ad0204f05cd69db676a1e43dfa19b28d96c7461a9607"} Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.868537 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.916755 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-config\") pod \"2c538682-fd83-4c33-b28c-0031f11cf907\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.916821 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c538682-fd83-4c33-b28c-0031f11cf907-serving-cert\") pod \"2c538682-fd83-4c33-b28c-0031f11cf907\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.916906 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5g5\" (UniqueName: \"kubernetes.io/projected/2c538682-fd83-4c33-b28c-0031f11cf907-kube-api-access-dk5g5\") pod \"2c538682-fd83-4c33-b28c-0031f11cf907\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.916928 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-client-ca\") pod \"2c538682-fd83-4c33-b28c-0031f11cf907\" (UID: \"2c538682-fd83-4c33-b28c-0031f11cf907\") " Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.917882 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-client-ca" (OuterVolumeSpecName: "client-ca") pod "2c538682-fd83-4c33-b28c-0031f11cf907" (UID: "2c538682-fd83-4c33-b28c-0031f11cf907"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.918445 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-config" (OuterVolumeSpecName: "config") pod "2c538682-fd83-4c33-b28c-0031f11cf907" (UID: "2c538682-fd83-4c33-b28c-0031f11cf907"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.926644 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c538682-fd83-4c33-b28c-0031f11cf907-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2c538682-fd83-4c33-b28c-0031f11cf907" (UID: "2c538682-fd83-4c33-b28c-0031f11cf907"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.926649 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c538682-fd83-4c33-b28c-0031f11cf907-kube-api-access-dk5g5" (OuterVolumeSpecName: "kube-api-access-dk5g5") pod "2c538682-fd83-4c33-b28c-0031f11cf907" (UID: "2c538682-fd83-4c33-b28c-0031f11cf907"). InnerVolumeSpecName "kube-api-access-dk5g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:18 crc kubenswrapper[5005]: I0225 11:21:18.938911 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018533 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2p9x\" (UniqueName: \"kubernetes.io/projected/cecf8f78-dacb-47a9-95f6-46c6f141e468-kube-api-access-x2p9x\") pod \"cecf8f78-dacb-47a9-95f6-46c6f141e468\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018609 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-proxy-ca-bundles\") pod \"cecf8f78-dacb-47a9-95f6-46c6f141e468\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018634 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-config\") pod \"cecf8f78-dacb-47a9-95f6-46c6f141e468\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018678 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-client-ca\") pod \"cecf8f78-dacb-47a9-95f6-46c6f141e468\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018755 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecf8f78-dacb-47a9-95f6-46c6f141e468-serving-cert\") pod \"cecf8f78-dacb-47a9-95f6-46c6f141e468\" (UID: \"cecf8f78-dacb-47a9-95f6-46c6f141e468\") " Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018947 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018963 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c538682-fd83-4c33-b28c-0031f11cf907-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018974 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5g5\" (UniqueName: \"kubernetes.io/projected/2c538682-fd83-4c33-b28c-0031f11cf907-kube-api-access-dk5g5\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.018981 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c538682-fd83-4c33-b28c-0031f11cf907-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.019381 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cecf8f78-dacb-47a9-95f6-46c6f141e468" (UID: "cecf8f78-dacb-47a9-95f6-46c6f141e468"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.019701 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-client-ca" (OuterVolumeSpecName: "client-ca") pod "cecf8f78-dacb-47a9-95f6-46c6f141e468" (UID: "cecf8f78-dacb-47a9-95f6-46c6f141e468"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.019731 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-config" (OuterVolumeSpecName: "config") pod "cecf8f78-dacb-47a9-95f6-46c6f141e468" (UID: "cecf8f78-dacb-47a9-95f6-46c6f141e468"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.023501 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cecf8f78-dacb-47a9-95f6-46c6f141e468-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cecf8f78-dacb-47a9-95f6-46c6f141e468" (UID: "cecf8f78-dacb-47a9-95f6-46c6f141e468"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.023516 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecf8f78-dacb-47a9-95f6-46c6f141e468-kube-api-access-x2p9x" (OuterVolumeSpecName: "kube-api-access-x2p9x") pod "cecf8f78-dacb-47a9-95f6-46c6f141e468" (UID: "cecf8f78-dacb-47a9-95f6-46c6f141e468"). InnerVolumeSpecName "kube-api-access-x2p9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.120325 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2p9x\" (UniqueName: \"kubernetes.io/projected/cecf8f78-dacb-47a9-95f6-46c6f141e468-kube-api-access-x2p9x\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.120355 5005 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.120363 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.120387 5005 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cecf8f78-dacb-47a9-95f6-46c6f141e468-client-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.120395 5005 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cecf8f78-dacb-47a9-95f6-46c6f141e468-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.736573 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85466dc99d-nhkg7"] Feb 25 11:21:19 crc kubenswrapper[5005]: E0225 11:21:19.737018 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecf8f78-dacb-47a9-95f6-46c6f141e468" containerName="controller-manager" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.737046 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecf8f78-dacb-47a9-95f6-46c6f141e468" containerName="controller-manager" Feb 25 11:21:19 crc 
kubenswrapper[5005]: E0225 11:21:19.737088 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c538682-fd83-4c33-b28c-0031f11cf907" containerName="route-controller-manager" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.737104 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c538682-fd83-4c33-b28c-0031f11cf907" containerName="route-controller-manager" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.737348 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c538682-fd83-4c33-b28c-0031f11cf907" containerName="route-controller-manager" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.737424 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecf8f78-dacb-47a9-95f6-46c6f141e468" containerName="controller-manager" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.738238 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.743546 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.744545 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.751140 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.756707 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85466dc99d-nhkg7"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.826491 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca54d7-877a-4b26-a3a2-720e0fe10af0-serving-cert\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.826579 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-proxy-ca-bundles\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.826760 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40ca54d7-877a-4b26-a3a2-720e0fe10af0-config\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.826914 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/40ca54d7-877a-4b26-a3a2-720e0fe10af0-client-ca\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.827060 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnwjs\" (UniqueName: \"kubernetes.io/projected/8f5de503-be40-4c9c-baa7-7f992ce7c84f-kube-api-access-cnwjs\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.827121 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-client-ca\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.827216 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54bpv\" (UniqueName: \"kubernetes.io/projected/40ca54d7-877a-4b26-a3a2-720e0fe10af0-kube-api-access-54bpv\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.827363 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-config\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " 
pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.827530 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5de503-be40-4c9c-baa7-7f992ce7c84f-serving-cert\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.871524 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.871559 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.871542 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7559c79686-zn9fc" event={"ID":"cecf8f78-dacb-47a9-95f6-46c6f141e468","Type":"ContainerDied","Data":"ab7e42e18b61d5c8cd515438a6c060f830a85504bb7ab69756366fb79a1725dc"} Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.872282 5005 scope.go:117] "RemoveContainer" containerID="c4eecba03393fa07c1f9ad0204f05cd69db676a1e43dfa19b28d96c7461a9607" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.919157 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7559c79686-zn9fc"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.925763 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7559c79686-zn9fc"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929127 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/40ca54d7-877a-4b26-a3a2-720e0fe10af0-config\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40ca54d7-877a-4b26-a3a2-720e0fe10af0-client-ca\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929226 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnwjs\" (UniqueName: \"kubernetes.io/projected/8f5de503-be40-4c9c-baa7-7f992ce7c84f-kube-api-access-cnwjs\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929266 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-client-ca\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929297 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54bpv\" (UniqueName: \"kubernetes.io/projected/40ca54d7-877a-4b26-a3a2-720e0fe10af0-kube-api-access-54bpv\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " 
pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929318 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-config\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929336 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5de503-be40-4c9c-baa7-7f992ce7c84f-serving-cert\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.929359 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca54d7-877a-4b26-a3a2-720e0fe10af0-serving-cert\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.930018 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-proxy-ca-bundles\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.930555 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40ca54d7-877a-4b26-a3a2-720e0fe10af0-client-ca\") 
pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.930791 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-client-ca\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.930968 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40ca54d7-877a-4b26-a3a2-720e0fe10af0-config\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.930998 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-proxy-ca-bundles\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.931767 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5de503-be40-4c9c-baa7-7f992ce7c84f-config\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.933142 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.934112 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40ca54d7-877a-4b26-a3a2-720e0fe10af0-serving-cert\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.934813 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5de503-be40-4c9c-baa7-7f992ce7c84f-serving-cert\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.936488 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654d5c677d-rt9q2"] Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.949930 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnwjs\" (UniqueName: \"kubernetes.io/projected/8f5de503-be40-4c9c-baa7-7f992ce7c84f-kube-api-access-cnwjs\") pod \"controller-manager-85466dc99d-nhkg7\" (UID: \"8f5de503-be40-4c9c-baa7-7f992ce7c84f\") " pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:19 crc kubenswrapper[5005]: I0225 11:21:19.952189 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54bpv\" (UniqueName: \"kubernetes.io/projected/40ca54d7-877a-4b26-a3a2-720e0fe10af0-kube-api-access-54bpv\") pod \"route-controller-manager-984d4d6d5-bjfm4\" (UID: \"40ca54d7-877a-4b26-a3a2-720e0fe10af0\") " pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 
11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.055805 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.074002 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.295454 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85466dc99d-nhkg7"] Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.563298 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4"] Feb 25 11:21:20 crc kubenswrapper[5005]: W0225 11:21:20.566137 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40ca54d7_877a_4b26_a3a2_720e0fe10af0.slice/crio-33b11fb4d6e74bf943fd0528c915f1bd5479996bcaefdbbd7ae23ef41d114f12 WatchSource:0}: Error finding container 33b11fb4d6e74bf943fd0528c915f1bd5479996bcaefdbbd7ae23ef41d114f12: Status 404 returned error can't find the container with id 33b11fb4d6e74bf943fd0528c915f1bd5479996bcaefdbbd7ae23ef41d114f12 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.692644 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c538682-fd83-4c33-b28c-0031f11cf907" path="/var/lib/kubelet/pods/2c538682-fd83-4c33-b28c-0031f11cf907/volumes" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.693397 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cecf8f78-dacb-47a9-95f6-46c6f141e468" path="/var/lib/kubelet/pods/cecf8f78-dacb-47a9-95f6-46c6f141e468/volumes" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.786396 5005 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.786992 5005 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787012 5005 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787130 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787143 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787151 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787157 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787166 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787171 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787180 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787185 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 25 
11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787192 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787197 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787203 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787209 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787216 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787221 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787230 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787235 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787243 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787249 5005 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787328 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787338 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787345 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787353 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787361 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787387 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787394 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 25 11:21:20 crc kubenswrapper[5005]: E0225 11:21:20.787478 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787486 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787573 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.787583 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.788470 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.788793 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a" gracePeriod=15 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.788789 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079" gracePeriod=15 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.788886 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417" gracePeriod=15 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.788835 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19" gracePeriod=15 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.788905 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1" gracePeriod=15 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.795676 5005 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.817066 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.869860 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.869926 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.869948 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.869971 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.869990 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.870004 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.870017 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.870041 
5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.912012 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" event={"ID":"8f5de503-be40-4c9c-baa7-7f992ce7c84f","Type":"ContainerStarted","Data":"79fd31020ed68faac02b5ad3376588c73c54d1818f000f5ef987733ab1caad9e"} Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.912064 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" event={"ID":"8f5de503-be40-4c9c-baa7-7f992ce7c84f","Type":"ContainerStarted","Data":"db28f846ba033e062d75b7236f964dcb8c9906e5451033a69a0e8b61876167fc"} Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.913509 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.918151 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.919250 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.919778 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a" exitCode=0 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.920498 
5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417" exitCode=0 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.920587 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1" exitCode=2 Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.920718 5005 scope.go:117] "RemoveContainer" containerID="c59c602484c6cda3ffbd176e13b44ae1676fa65bde2f71e60e0e03bdc0c96375" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.929196 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" event={"ID":"40ca54d7-877a-4b26-a3a2-720e0fe10af0","Type":"ContainerStarted","Data":"075c16fa4924929b7bbd86140297fa0d4155792e539fd3dae597686b7499302a"} Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.929241 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" event={"ID":"40ca54d7-877a-4b26-a3a2-720e0fe10af0","Type":"ContainerStarted","Data":"33b11fb4d6e74bf943fd0528c915f1bd5479996bcaefdbbd7ae23ef41d114f12"} Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.929541 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.930630 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970640 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970697 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970718 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970743 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970762 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970777 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970781 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970833 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970836 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970791 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970869 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970890 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970909 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970933 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.970952 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:20 crc kubenswrapper[5005]: I0225 11:21:20.971084 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:21 crc 
kubenswrapper[5005]: I0225 11:21:21.114248 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:21:21 crc kubenswrapper[5005]: W0225 11:21:21.129399 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bab5ec39dc42ade3a6022fd45baa8bac091af59ca5bb36dd896e26aac10b74ae WatchSource:0}: Error finding container bab5ec39dc42ade3a6022fd45baa8bac091af59ca5bb36dd896e26aac10b74ae: Status 404 returned error can't find the container with id bab5ec39dc42ade3a6022fd45baa8bac091af59ca5bb36dd896e26aac10b74ae Feb 25 11:21:21 crc kubenswrapper[5005]: E0225 11:21:21.131833 5005 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897796b873e93f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:21:21.131066357 +0000 UTC m=+195.171798684,LastTimestamp:2026-02-25 11:21:21.131066357 +0000 UTC m=+195.171798684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.929691 5005 
patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.930015 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.938336 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.939441 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19" exitCode=0 Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.941104 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8"} Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.941154 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bab5ec39dc42ade3a6022fd45baa8bac091af59ca5bb36dd896e26aac10b74ae"} Feb 25 11:21:21 
crc kubenswrapper[5005]: I0225 11:21:21.942992 5005 generic.go:334] "Generic (PLEG): container finished" podID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" containerID="e0b28b9462e6cd8cfdb15e12eeb5e59f87f11b43afa2878205e62a461c97cdd3" exitCode=0 Feb 25 11:21:21 crc kubenswrapper[5005]: I0225 11:21:21.943054 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4","Type":"ContainerDied","Data":"e0b28b9462e6cd8cfdb15e12eeb5e59f87f11b43afa2878205e62a461c97cdd3"} Feb 25 11:21:22 crc kubenswrapper[5005]: I0225 11:21:22.944231 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:22 crc kubenswrapper[5005]: I0225 11:21:22.944326 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.286440 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.287636 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.328720 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411344 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-var-lock\") pod \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411429 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kubelet-dir\") pod \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411478 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411484 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-var-lock" (OuterVolumeSpecName: "var-lock") pod "1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" (UID: "1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411514 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411543 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" (UID: "1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411554 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kube-api-access\") pod \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\" (UID: \"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4\") " Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411566 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411584 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411623 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411924 5005 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-var-lock\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411917 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.411946 5005 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.412054 5005 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.412102 5005 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.416148 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" (UID: "1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.513320 5005 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.513407 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.960892 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.960928 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4","Type":"ContainerDied","Data":"1de2f4b27cf45483f36d76c50c20b12a8b5d558de7cbf3193b7b893399cd6d7e"} Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.961161 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de2f4b27cf45483f36d76c50c20b12a8b5d558de7cbf3193b7b893399cd6d7e" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.966163 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.967171 5005 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079" exitCode=0 Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.967292 5005 scope.go:117] "RemoveContainer" containerID="615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.967306 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:23 crc kubenswrapper[5005]: I0225 11:21:23.999029 5005 scope.go:117] "RemoveContainer" containerID="9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.023750 5005 scope.go:117] "RemoveContainer" containerID="bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.050719 5005 scope.go:117] "RemoveContainer" containerID="68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.078516 5005 scope.go:117] "RemoveContainer" containerID="5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.109042 5005 scope.go:117] "RemoveContainer" containerID="f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.144907 5005 scope.go:117] "RemoveContainer" containerID="615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a" Feb 25 11:21:24 crc kubenswrapper[5005]: E0225 11:21:24.146524 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a\": container with ID starting with 615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a not found: ID does not exist" containerID="615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.146592 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a"} err="failed to get container status \"615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a\": rpc error: code = NotFound desc = could not find 
container \"615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a\": container with ID starting with 615166d051854429ec9eefe2bb6a7c81676db08ee8b7e91a0503c82f82c27b7a not found: ID does not exist" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.146635 5005 scope.go:117] "RemoveContainer" containerID="9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417" Feb 25 11:21:24 crc kubenswrapper[5005]: E0225 11:21:24.147152 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\": container with ID starting with 9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417 not found: ID does not exist" containerID="9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.147345 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417"} err="failed to get container status \"9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\": rpc error: code = NotFound desc = could not find container \"9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417\": container with ID starting with 9fb86efb4724977eee4e0b80af3aa1b7320b68118016250cd74ef6f631008417 not found: ID does not exist" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.147548 5005 scope.go:117] "RemoveContainer" containerID="bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19" Feb 25 11:21:24 crc kubenswrapper[5005]: E0225 11:21:24.148907 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\": container with ID starting with bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19 not found: ID does 
not exist" containerID="bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.148956 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19"} err="failed to get container status \"bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\": rpc error: code = NotFound desc = could not find container \"bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19\": container with ID starting with bc3059892241d2a21084bd5b7d8bda16e9d67b83712919ce2ea4c7157f3c0b19 not found: ID does not exist" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.148989 5005 scope.go:117] "RemoveContainer" containerID="68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1" Feb 25 11:21:24 crc kubenswrapper[5005]: E0225 11:21:24.149438 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\": container with ID starting with 68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1 not found: ID does not exist" containerID="68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.149529 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1"} err="failed to get container status \"68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\": rpc error: code = NotFound desc = could not find container \"68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1\": container with ID starting with 68ec20b6113b19505a0f124a1bbd4c2a1d418c686236a02e96759f3ccbe3b7e1 not found: ID does not exist" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.149571 5005 
scope.go:117] "RemoveContainer" containerID="5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079" Feb 25 11:21:24 crc kubenswrapper[5005]: E0225 11:21:24.149983 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\": container with ID starting with 5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079 not found: ID does not exist" containerID="5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.150032 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079"} err="failed to get container status \"5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\": rpc error: code = NotFound desc = could not find container \"5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079\": container with ID starting with 5d75520daa9f2549db391f75b35e5a1ae156323f4ed30dd5c3eff1c669b99079 not found: ID does not exist" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.150061 5005 scope.go:117] "RemoveContainer" containerID="f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8" Feb 25 11:21:24 crc kubenswrapper[5005]: E0225 11:21:24.150585 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\": container with ID starting with f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8 not found: ID does not exist" containerID="f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.150631 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8"} err="failed to get container status \"f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\": rpc error: code = NotFound desc = could not find container \"f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8\": container with ID starting with f1215ec0ba3dc9272bbd8f648ab046459d3c8dd9de728a938102d269a234c9b8 not found: ID does not exist" Feb 25 11:21:24 crc kubenswrapper[5005]: I0225 11:21:24.697093 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 25 11:21:25 crc kubenswrapper[5005]: I0225 11:21:25.914307 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:25 crc kubenswrapper[5005]: I0225 11:21:25.915076 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:25 crc kubenswrapper[5005]: I0225 11:21:25.915697 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection 
refused" Feb 25 11:21:25 crc kubenswrapper[5005]: I0225 11:21:25.916089 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:25 crc kubenswrapper[5005]: I0225 11:21:25.916445 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:25 crc kubenswrapper[5005]: I0225 11:21:25.916774 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:26 crc kubenswrapper[5005]: I0225 11:21:26.691199 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:26 crc kubenswrapper[5005]: I0225 11:21:26.691679 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:26 crc kubenswrapper[5005]: I0225 11:21:26.692087 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:26 crc kubenswrapper[5005]: I0225 11:21:26.692578 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:27 crc kubenswrapper[5005]: E0225 11:21:27.689628 5005 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897796b873e93f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-25 11:21:21.131066357 +0000 UTC 
m=+195.171798684,LastTimestamp:2026-02-25 11:21:21.131066357 +0000 UTC m=+195.171798684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.711708 5005 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.717304 5005 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.717830 5005 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.718576 5005 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.718926 5005 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: I0225 11:21:30.718973 5005 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.719413 5005 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.746079 5005 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" volumeName="registry-storage" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.840640 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:21:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:21:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:21:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-25T11:21:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 
38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.841234 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.841939 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.842701 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.843133 5005 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.843173 5005 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 25 11:21:30 crc kubenswrapper[5005]: E0225 11:21:30.920874 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Feb 25 11:21:31 crc kubenswrapper[5005]: I0225 11:21:31.074627 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:31 crc kubenswrapper[5005]: I0225 11:21:31.074704 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:31 crc kubenswrapper[5005]: E0225 11:21:31.321921 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Feb 25 11:21:32 crc kubenswrapper[5005]: E0225 11:21:32.122967 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.685436 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.686947 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.687658 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.688425 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.689111 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.710762 5005 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:33 crc 
kubenswrapper[5005]: I0225 11:21:33.710989 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:33 crc kubenswrapper[5005]: E0225 11:21:33.711529 5005 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:33 crc kubenswrapper[5005]: I0225 11:21:33.712317 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:33 crc kubenswrapper[5005]: E0225 11:21:33.724509 5005 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Feb 25 11:21:33 crc kubenswrapper[5005]: W0225 11:21:33.733089 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2be711f93004afc01227d9a8bca7c4cb807fa3ea849f6c7dd036903d6fe07033 WatchSource:0}: Error finding container 2be711f93004afc01227d9a8bca7c4cb807fa3ea849f6c7dd036903d6fe07033: Status 404 returned error can't find the container with id 2be711f93004afc01227d9a8bca7c4cb807fa3ea849f6c7dd036903d6fe07033 Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.034321 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.034940 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.034995 5005 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cc7b6fc03c411ec651f8db31f2510f6b0fa45f7397e8466048befea7c261ae8e" exitCode=1 Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.035064 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cc7b6fc03c411ec651f8db31f2510f6b0fa45f7397e8466048befea7c261ae8e"} Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.035576 5005 scope.go:117] "RemoveContainer" containerID="cc7b6fc03c411ec651f8db31f2510f6b0fa45f7397e8466048befea7c261ae8e" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.036006 5005 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.036340 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.036662 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.036915 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2ef55bf9af56aa006101192b78c23e131b6486eb9c90c6d516a1e424626ef9d5"} Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.036978 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2be711f93004afc01227d9a8bca7c4cb807fa3ea849f6c7dd036903d6fe07033"} Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.036923 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.037219 5005 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.037239 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.037645 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: E0225 11:21:34.037698 5005 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.038759 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.039221 5005 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.039822 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.040470 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:34 crc kubenswrapper[5005]: I0225 11:21:34.040947 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.045174 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.046648 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.046755 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"542f6ac4c9f860568a7fc4834aad1dcba23abe5be214d7904be8990c90775e94"} Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.047613 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.047943 5005 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.048309 5005 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.048642 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.048889 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.049059 5005 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2ef55bf9af56aa006101192b78c23e131b6486eb9c90c6d516a1e424626ef9d5" exitCode=0 Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.049101 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2ef55bf9af56aa006101192b78c23e131b6486eb9c90c6d516a1e424626ef9d5"} Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.049473 5005 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.049497 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.049715 5005 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: E0225 11:21:35.049866 5005 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.050155 5005 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.050582 5005 status_manager.go:851] "Failed to get status for pod" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" 
pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-984d4d6d5-bjfm4\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.050763 5005 status_manager.go:851] "Failed to get status for pod" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:35 crc kubenswrapper[5005]: I0225 11:21:35.050976 5005 status_manager.go:851] "Failed to get status for pod" podUID="8f5de503-be40-4c9c-baa7-7f992ce7c84f" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85466dc99d-nhkg7\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 25 11:21:36 crc kubenswrapper[5005]: I0225 11:21:36.070772 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"444fcad6f7960b412ecffb23785468503a49aac20581fae30aaf167c1fda2253"} Feb 25 11:21:36 crc kubenswrapper[5005]: I0225 11:21:36.070816 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc8b77767f08416469d6e0c4f4b1515aa6291bd079f310b3df83d2b824070cb9"} Feb 25 11:21:36 crc kubenswrapper[5005]: I0225 11:21:36.070827 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"01ba15593f2b57025f8357109907ce3760ab8f0faddb6a5a85a264f0bd89011e"} Feb 25 11:21:36 crc kubenswrapper[5005]: I0225 11:21:36.070838 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9941e688311e990cb04c0e35aabccc0cd42df94dfc2000afadbb1cb5e228548f"} Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 11:21:37.077632 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"793c9ed351a45b534e9eb8965a17e6b29f8799fa42b778299c39208ae4914c22"} Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 11:21:37.077960 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 11:21:37.077896 5005 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 11:21:37.077977 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 11:21:37.669535 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 11:21:37.669705 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 25 11:21:37 crc kubenswrapper[5005]: I0225 
11:21:37.669752 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 25 11:21:38 crc kubenswrapper[5005]: I0225 11:21:38.713318 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:38 crc kubenswrapper[5005]: I0225 11:21:38.713639 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:38 crc kubenswrapper[5005]: I0225 11:21:38.718325 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:40 crc kubenswrapper[5005]: I0225 11:21:40.996670 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:21:41 crc kubenswrapper[5005]: I0225 11:21:41.075506 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:41 crc kubenswrapper[5005]: I0225 11:21:41.075600 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Feb 25 11:21:42 crc kubenswrapper[5005]: I0225 11:21:42.088447 5005 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:43 crc kubenswrapper[5005]: I0225 11:21:43.111044 5005 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:43 crc kubenswrapper[5005]: I0225 11:21:43.111398 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:43 crc kubenswrapper[5005]: I0225 11:21:43.116950 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:43 crc kubenswrapper[5005]: I0225 11:21:43.127318 5005 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3b25d415-e88d-4171-ae2a-3e0fcb202c57" Feb 25 11:21:44 crc kubenswrapper[5005]: I0225 11:21:44.116348 5005 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:44 crc kubenswrapper[5005]: I0225 11:21:44.116401 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9c06f67-7a5e-4278-818e-873a7b9618de" Feb 25 11:21:46 crc kubenswrapper[5005]: I0225 11:21:46.705456 5005 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3b25d415-e88d-4171-ae2a-3e0fcb202c57" Feb 25 11:21:47 crc kubenswrapper[5005]: I0225 11:21:47.669355 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 25 11:21:47 crc kubenswrapper[5005]: I0225 11:21:47.669441 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 25 11:21:51 crc kubenswrapper[5005]: I0225 11:21:51.075122 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:51 crc kubenswrapper[5005]: I0225 11:21:51.075619 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:51 crc kubenswrapper[5005]: I0225 11:21:51.075203 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:51 crc kubenswrapper[5005]: I0225 11:21:51.075747 5005 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:52 crc kubenswrapper[5005]: I0225 11:21:52.175698 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-984d4d6d5-bjfm4_40ca54d7-877a-4b26-a3a2-720e0fe10af0/route-controller-manager/0.log" Feb 25 11:21:52 crc kubenswrapper[5005]: I0225 11:21:52.176083 5005 generic.go:334] "Generic (PLEG): container finished" podID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerID="075c16fa4924929b7bbd86140297fa0d4155792e539fd3dae597686b7499302a" exitCode=255 Feb 25 11:21:52 crc kubenswrapper[5005]: I0225 11:21:52.176132 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" event={"ID":"40ca54d7-877a-4b26-a3a2-720e0fe10af0","Type":"ContainerDied","Data":"075c16fa4924929b7bbd86140297fa0d4155792e539fd3dae597686b7499302a"} Feb 25 11:21:52 crc kubenswrapper[5005]: I0225 11:21:52.176916 5005 scope.go:117] "RemoveContainer" containerID="075c16fa4924929b7bbd86140297fa0d4155792e539fd3dae597686b7499302a" Feb 25 11:21:52 crc kubenswrapper[5005]: I0225 11:21:52.347361 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 25 11:21:52 crc kubenswrapper[5005]: I0225 11:21:52.799772 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.104955 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.183953 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-984d4d6d5-bjfm4_40ca54d7-877a-4b26-a3a2-720e0fe10af0/route-controller-manager/0.log" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.184016 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" event={"ID":"40ca54d7-877a-4b26-a3a2-720e0fe10af0","Type":"ContainerStarted","Data":"b298ec5453f986421657bfc6f6ce3da9b95879c6948f817f113fc4a58ef7b5f3"} Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.184611 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.244566 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.347114 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.628709 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 25 11:21:53 crc kubenswrapper[5005]: I0225 11:21:53.714279 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.031960 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.105352 5005 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.184492 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.184581 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.301442 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.332491 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.422086 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.456239 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 25 11:21:54 crc kubenswrapper[5005]: I0225 11:21:54.933743 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.097016 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.118098 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.119983 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.190249 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.191425 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.191472 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.222958 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.231043 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.276333 5005 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.327315 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.477156 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.510509 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.642464 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.665455 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.666258 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.705717 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.771615 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.815280 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.833953 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 
11:21:55.971684 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 25 11:21:55 crc kubenswrapper[5005]: I0225 11:21:55.990977 5005 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.000409 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.003908 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.009641 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.034970 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.181137 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.289293 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.468154 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.488338 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.490612 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.543094 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.546362 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.549948 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.583545 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.717868 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.926298 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.948200 5005 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.951805 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85466dc99d-nhkg7" podStartSLOduration=38.951781108 podStartE2EDuration="38.951781108s" podCreationTimestamp="2026-02-25 11:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:41.87971312 +0000 UTC m=+215.920445497" watchObservedRunningTime="2026-02-25 11:21:56.951781108 +0000 UTC m=+230.992513475" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.952397 5005 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podStartSLOduration=38.952359852 podStartE2EDuration="38.952359852s" podCreationTimestamp="2026-02-25 11:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:41.839871693 +0000 UTC m=+215.880604020" watchObservedRunningTime="2026-02-25 11:21:56.952359852 +0000 UTC m=+230.993092219" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.953705 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.953690177 podStartE2EDuration="36.953690177s" podCreationTimestamp="2026-02-25 11:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:41.776893735 +0000 UTC m=+215.817626072" watchObservedRunningTime="2026-02-25 11:21:56.953690177 +0000 UTC m=+230.994422544" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.954176 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.956508 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.956686 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.964437 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 25 11:21:56 crc kubenswrapper[5005]: I0225 11:21:56.979621 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.979607639 podStartE2EDuration="14.979607639s" podCreationTimestamp="2026-02-25 11:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:21:56.978342426 +0000 UTC m=+231.019074743" watchObservedRunningTime="2026-02-25 11:21:56.979607639 +0000 UTC m=+231.020339956" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.022685 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.052061 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.114524 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.181575 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.191548 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.193197 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.197962 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.274165 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.287419 5005 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.348273 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.363135 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.403325 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.407212 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.473622 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.534217 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.536060 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.543633 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.560867 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.607342 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 25 11:21:57 crc 
kubenswrapper[5005]: I0225 11:21:57.669218 5005 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.669292 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.669352 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.670122 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"542f6ac4c9f860568a7fc4834aad1dcba23abe5be214d7904be8990c90775e94"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.670250 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://542f6ac4c9f860568a7fc4834aad1dcba23abe5be214d7904be8990c90775e94" gracePeriod=30 Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.672468 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 25 11:21:57 crc 
kubenswrapper[5005]: I0225 11:21:57.694338 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.706623 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.734631 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.907661 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.920155 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 25 11:21:57 crc kubenswrapper[5005]: I0225 11:21:57.968352 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.009637 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.014007 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.029743 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.087441 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.087499 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.210463 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.255165 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.288017 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.381500 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.397645 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.463238 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.579679 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.582966 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.635573 5005 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.698544 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.833002 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.915635 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 25 11:21:58 crc kubenswrapper[5005]: I0225 11:21:58.945852 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.023346 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.057668 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.268771 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.326708 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.352547 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.404527 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 25 11:21:59 crc 
kubenswrapper[5005]: I0225 11:21:59.427731 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.433214 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.593656 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.616126 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.616564 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.640081 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.699580 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.700117 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.710297 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.729284 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.796426 5005 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.798932 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.825235 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 25 11:21:59 crc kubenswrapper[5005]: I0225 11:21:59.958709 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.005184 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.066487 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.071276 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.111718 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.118895 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.250366 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.286819 5005 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.401437 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.411315 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.660086 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.821992 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.827806 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.886798 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.921512 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 25 11:22:00 crc kubenswrapper[5005]: I0225 11:22:00.993164 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.074205 5005 patch_prober.go:28] interesting pod/route-controller-manager-984d4d6d5-bjfm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 
11:22:01.074727 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" podUID="40ca54d7-877a-4b26-a3a2-720e0fe10af0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.125632 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.134771 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.161541 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.264633 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.274826 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.293027 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.322716 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.324456 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 
11:22:01.350746 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.424533 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.476539 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.528126 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.541494 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.543161 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.555941 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.556197 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.654037 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.667801 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.689176 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.690774 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.694351 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.790678 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.888476 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.938138 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 25 11:22:01 crc kubenswrapper[5005]: I0225 11:22:01.998038 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.078768 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.096319 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.096910 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.201717 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.204506 5005 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.214436 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.285615 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.328693 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.346523 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.373193 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.402544 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.471678 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.605225 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.669403 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.732011 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.840195 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.850228 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.940885 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 25 11:22:02 crc kubenswrapper[5005]: I0225 11:22:02.984055 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.072022 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.108273 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.118766 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.123658 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.214295 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.298537 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.359709 5005 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.389406 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.433076 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.437219 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.448152 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.460510 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.595757 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.686551 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.730032 5005 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.780094 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.792015 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.828595 5005 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.852301 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.869497 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.909766 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.954830 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.972245 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 25 11:22:03 crc kubenswrapper[5005]: I0225 11:22:03.977976 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.100961 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.137325 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.273784 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.305490 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.361131 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.445568 5005 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.445841 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8" gracePeriod=5 Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.495280 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.530530 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.549992 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.646804 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.704634 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 25 11:22:04 crc kubenswrapper[5005]: I0225 11:22:04.719585 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 25 11:22:05 
crc kubenswrapper[5005]: I0225 11:22:05.144528 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.232114 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.234013 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.305448 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.310919 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.351434 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.469319 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.517869 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.577739 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.595251 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.672030 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.842118 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.851328 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 25 11:22:05 crc kubenswrapper[5005]: I0225 11:22:05.948960 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.028732 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.136286 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.158193 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.275228 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.420791 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.623393 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.725327 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 25 11:22:06 crc 
kubenswrapper[5005]: I0225 11:22:06.744433 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.748464 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.781311 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.784234 5005 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.920462 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.939691 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 25 11:22:06 crc kubenswrapper[5005]: I0225 11:22:06.968340 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.359406 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.505326 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.533649 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.750569 5005 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.761624 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.764819 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.958541 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 25 11:22:07 crc kubenswrapper[5005]: I0225 11:22:07.960095 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 11:22:08.134779 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 11:22:08.206885 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 11:22:08.285962 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 11:22:08.304221 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 11:22:08.359855 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 11:22:08.520192 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 25 11:22:08 crc kubenswrapper[5005]: I0225 
11:22:08.769619 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 25 11:22:09 crc kubenswrapper[5005]: I0225 11:22:09.311658 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 25 11:22:09 crc kubenswrapper[5005]: I0225 11:22:09.634016 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 25 11:22:09 crc kubenswrapper[5005]: I0225 11:22:09.982219 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.055722 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.055798 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.080153 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-984d4d6d5-bjfm4" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.235273 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.235982 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.236298 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.236693 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.236982 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 25 11:22:10 
crc kubenswrapper[5005]: I0225 11:22:10.235566 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.236069 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.236483 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.236766 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.238329 5005 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.238655 5005 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.238860 5005 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.239051 5005 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.250027 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.277567 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.277638 5005 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8" exitCode=137 Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.277677 5005 scope.go:117] "RemoveContainer" containerID="7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.277783 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.314647 5005 scope.go:117] "RemoveContainer" containerID="7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8" Feb 25 11:22:10 crc kubenswrapper[5005]: E0225 11:22:10.315610 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8\": container with ID starting with 7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8 not found: ID does not exist" containerID="7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.315903 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8"} err="failed to get container status \"7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8\": rpc error: code = NotFound desc = could not find container 
\"7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8\": container with ID starting with 7f707437e5e9648bfbc7a4df105756aeaca4fe5eddae1568e20039510903dda8 not found: ID does not exist" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.341215 5005 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.700006 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.700244 5005 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.711818 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.712109 5005 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0b469fd7-a3cb-434e-8b5f-9dd98e35cc95" Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.717393 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 25 11:22:10 crc kubenswrapper[5005]: I0225 11:22:10.717441 5005 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0b469fd7-a3cb-434e-8b5f-9dd98e35cc95" Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.087541 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.088147 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.384874 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.388232 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.388757 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.388805 5005 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="542f6ac4c9f860568a7fc4834aad1dcba23abe5be214d7904be8990c90775e94" exitCode=137 Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.388847 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"542f6ac4c9f860568a7fc4834aad1dcba23abe5be214d7904be8990c90775e94"} Feb 25 11:22:28 crc kubenswrapper[5005]: I0225 11:22:28.388898 5005 scope.go:117] "RemoveContainer" 
containerID="cc7b6fc03c411ec651f8db31f2510f6b0fa45f7397e8466048befea7c261ae8e" Feb 25 11:22:29 crc kubenswrapper[5005]: I0225 11:22:29.399678 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 25 11:22:29 crc kubenswrapper[5005]: I0225 11:22:29.406646 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 25 11:22:29 crc kubenswrapper[5005]: I0225 11:22:29.406780 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de99923ba9026e9a75515b54b12b60f5549dd6aba6d612ddad4bc1e02241d3e3"} Feb 25 11:22:30 crc kubenswrapper[5005]: I0225 11:22:30.415455 5005 generic.go:334] "Generic (PLEG): container finished" podID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerID="070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95" exitCode=0 Feb 25 11:22:30 crc kubenswrapper[5005]: I0225 11:22:30.415634 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" event={"ID":"9a85e597-ce82-4b41-b00e-e142bbd38849","Type":"ContainerDied","Data":"070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95"} Feb 25 11:22:30 crc kubenswrapper[5005]: I0225 11:22:30.416808 5005 scope.go:117] "RemoveContainer" containerID="070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95" Feb 25 11:22:31 crc kubenswrapper[5005]: I0225 11:22:30.996643 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:22:31 crc kubenswrapper[5005]: I0225 11:22:31.423312 5005 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" event={"ID":"9a85e597-ce82-4b41-b00e-e142bbd38849","Type":"ContainerStarted","Data":"400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f"} Feb 25 11:22:31 crc kubenswrapper[5005]: I0225 11:22:31.424011 5005 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" containerID="cri-o://070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95" Feb 25 11:22:31 crc kubenswrapper[5005]: I0225 11:22:31.424050 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:22:32 crc kubenswrapper[5005]: I0225 11:22:32.430095 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:22:32 crc kubenswrapper[5005]: I0225 11:22:32.432245 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:22:37 crc kubenswrapper[5005]: I0225 11:22:37.669172 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:22:37 crc kubenswrapper[5005]: I0225 11:22:37.674775 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:22:38 crc kubenswrapper[5005]: I0225 11:22:38.474665 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.716358 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533642-7t82d"] Feb 25 11:22:48 crc kubenswrapper[5005]: E0225 11:22:48.716990 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" containerName="installer" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.717002 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" containerName="installer" Feb 25 11:22:48 crc kubenswrapper[5005]: E0225 11:22:48.717014 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.717020 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.717120 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3efbe3-23ee-4444-9ee5-c8ea6b2109c4" containerName="installer" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.717136 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.717499 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.719602 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.719786 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.719873 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.732630 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533642-7t82d"] Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.784817 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqjm\" (UniqueName: \"kubernetes.io/projected/d8d98113-0837-4250-87f2-f4c32da84c73-kube-api-access-mjqjm\") pod \"auto-csr-approver-29533642-7t82d\" (UID: \"d8d98113-0837-4250-87f2-f4c32da84c73\") " pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.886449 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqjm\" (UniqueName: \"kubernetes.io/projected/d8d98113-0837-4250-87f2-f4c32da84c73-kube-api-access-mjqjm\") pod \"auto-csr-approver-29533642-7t82d\" (UID: \"d8d98113-0837-4250-87f2-f4c32da84c73\") " pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:48 crc kubenswrapper[5005]: I0225 11:22:48.913704 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqjm\" (UniqueName: \"kubernetes.io/projected/d8d98113-0837-4250-87f2-f4c32da84c73-kube-api-access-mjqjm\") pod \"auto-csr-approver-29533642-7t82d\" (UID: \"d8d98113-0837-4250-87f2-f4c32da84c73\") " 
pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:49 crc kubenswrapper[5005]: I0225 11:22:49.031995 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:49 crc kubenswrapper[5005]: I0225 11:22:49.421721 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533642-7t82d"] Feb 25 11:22:49 crc kubenswrapper[5005]: I0225 11:22:49.544777 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533642-7t82d" event={"ID":"d8d98113-0837-4250-87f2-f4c32da84c73","Type":"ContainerStarted","Data":"92f451efc8f05faf90f19496d4b9639b37a199573cc870ce1a188c38580d02c1"} Feb 25 11:22:51 crc kubenswrapper[5005]: I0225 11:22:51.558416 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533642-7t82d" event={"ID":"d8d98113-0837-4250-87f2-f4c32da84c73","Type":"ContainerStarted","Data":"b7a2ce1a21e524e46a844e30b1b48ab33784e69f462a67ea346c7bf93eb50dd8"} Feb 25 11:22:51 crc kubenswrapper[5005]: I0225 11:22:51.572542 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533642-7t82d" podStartSLOduration=2.170304454 podStartE2EDuration="3.572517311s" podCreationTimestamp="2026-02-25 11:22:48 +0000 UTC" firstStartedPulling="2026-02-25 11:22:49.43312175 +0000 UTC m=+283.473854087" lastFinishedPulling="2026-02-25 11:22:50.835334577 +0000 UTC m=+284.876066944" observedRunningTime="2026-02-25 11:22:51.569893764 +0000 UTC m=+285.610626091" watchObservedRunningTime="2026-02-25 11:22:51.572517311 +0000 UTC m=+285.613249648" Feb 25 11:22:52 crc kubenswrapper[5005]: I0225 11:22:52.569073 5005 generic.go:334] "Generic (PLEG): container finished" podID="d8d98113-0837-4250-87f2-f4c32da84c73" containerID="b7a2ce1a21e524e46a844e30b1b48ab33784e69f462a67ea346c7bf93eb50dd8" exitCode=0 Feb 25 11:22:52 crc 
kubenswrapper[5005]: I0225 11:22:52.571535 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533642-7t82d" event={"ID":"d8d98113-0837-4250-87f2-f4c32da84c73","Type":"ContainerDied","Data":"b7a2ce1a21e524e46a844e30b1b48ab33784e69f462a67ea346c7bf93eb50dd8"} Feb 25 11:22:53 crc kubenswrapper[5005]: I0225 11:22:53.925634 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:53 crc kubenswrapper[5005]: I0225 11:22:53.953808 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqjm\" (UniqueName: \"kubernetes.io/projected/d8d98113-0837-4250-87f2-f4c32da84c73-kube-api-access-mjqjm\") pod \"d8d98113-0837-4250-87f2-f4c32da84c73\" (UID: \"d8d98113-0837-4250-87f2-f4c32da84c73\") " Feb 25 11:22:53 crc kubenswrapper[5005]: I0225 11:22:53.967899 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d98113-0837-4250-87f2-f4c32da84c73-kube-api-access-mjqjm" (OuterVolumeSpecName: "kube-api-access-mjqjm") pod "d8d98113-0837-4250-87f2-f4c32da84c73" (UID: "d8d98113-0837-4250-87f2-f4c32da84c73"). InnerVolumeSpecName "kube-api-access-mjqjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:22:54 crc kubenswrapper[5005]: I0225 11:22:54.056447 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqjm\" (UniqueName: \"kubernetes.io/projected/d8d98113-0837-4250-87f2-f4c32da84c73-kube-api-access-mjqjm\") on node \"crc\" DevicePath \"\"" Feb 25 11:22:54 crc kubenswrapper[5005]: I0225 11:22:54.587954 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533642-7t82d" event={"ID":"d8d98113-0837-4250-87f2-f4c32da84c73","Type":"ContainerDied","Data":"92f451efc8f05faf90f19496d4b9639b37a199573cc870ce1a188c38580d02c1"} Feb 25 11:22:54 crc kubenswrapper[5005]: I0225 11:22:54.588013 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f451efc8f05faf90f19496d4b9639b37a199573cc870ce1a188c38580d02c1" Feb 25 11:22:54 crc kubenswrapper[5005]: I0225 11:22:54.588024 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533642-7t82d" Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.087099 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.087692 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.087788 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.088823 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.088918 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9" gracePeriod=600 Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.638137 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9" exitCode=0 Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.638245 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9"} Feb 25 11:22:58 crc kubenswrapper[5005]: I0225 11:22:58.638551 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"313fce3c48a9efbcb03b099224aaa818e6cd25bfe8ceeb165fe4267714129461"} Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.671209 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-jkj4r"] Feb 25 11:23:36 crc kubenswrapper[5005]: E0225 11:23:36.673399 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d98113-0837-4250-87f2-f4c32da84c73" containerName="oc" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.673503 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d98113-0837-4250-87f2-f4c32da84c73" containerName="oc" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.673726 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d98113-0837-4250-87f2-f4c32da84c73" containerName="oc" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.674297 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.693122 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jkj4r"] Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.795495 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmld\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-kube-api-access-lsmld\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.795604 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d9f04cc-c164-43f8-8afd-66ce77db6302-registry-certificates\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.795635 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d9f04cc-c164-43f8-8afd-66ce77db6302-trusted-ca\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.795852 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d9f04cc-c164-43f8-8afd-66ce77db6302-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.796112 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.796197 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-bound-sa-token\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.797611 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-registry-tls\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: 
\"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.797779 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d9f04cc-c164-43f8-8afd-66ce77db6302-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.819707 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.898799 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d9f04cc-c164-43f8-8afd-66ce77db6302-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.898884 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-bound-sa-token\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.898924 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-registry-tls\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.898951 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d9f04cc-c164-43f8-8afd-66ce77db6302-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.898988 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmld\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-kube-api-access-lsmld\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.899023 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d9f04cc-c164-43f8-8afd-66ce77db6302-registry-certificates\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.899041 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d9f04cc-c164-43f8-8afd-66ce77db6302-trusted-ca\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.900712 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d9f04cc-c164-43f8-8afd-66ce77db6302-trusted-ca\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.902057 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d9f04cc-c164-43f8-8afd-66ce77db6302-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.902767 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d9f04cc-c164-43f8-8afd-66ce77db6302-registry-certificates\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.910077 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d9f04cc-c164-43f8-8afd-66ce77db6302-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.910142 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-registry-tls\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc 
kubenswrapper[5005]: I0225 11:23:36.922109 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmld\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-kube-api-access-lsmld\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:36 crc kubenswrapper[5005]: I0225 11:23:36.942025 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d9f04cc-c164-43f8-8afd-66ce77db6302-bound-sa-token\") pod \"image-registry-66df7c8f76-jkj4r\" (UID: \"0d9f04cc-c164-43f8-8afd-66ce77db6302\") " pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:37 crc kubenswrapper[5005]: I0225 11:23:37.002146 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:37 crc kubenswrapper[5005]: I0225 11:23:37.226261 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jkj4r"] Feb 25 11:23:37 crc kubenswrapper[5005]: I0225 11:23:37.908472 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" event={"ID":"0d9f04cc-c164-43f8-8afd-66ce77db6302","Type":"ContainerStarted","Data":"fbff96ee3f255dc7ba193e75842c59c0c2be52f60eb870bdc2fd8b2166d44b34"} Feb 25 11:23:37 crc kubenswrapper[5005]: I0225 11:23:37.908863 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" event={"ID":"0d9f04cc-c164-43f8-8afd-66ce77db6302","Type":"ContainerStarted","Data":"29e9e566c8185a4dc49dd7484af21e7dac2b1166420ddd03132311ea3d437e14"} Feb 25 11:23:37 crc kubenswrapper[5005]: I0225 11:23:37.908884 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:37 crc kubenswrapper[5005]: I0225 11:23:37.929352 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" podStartSLOduration=1.929335526 podStartE2EDuration="1.929335526s" podCreationTimestamp="2026-02-25 11:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:23:37.925619636 +0000 UTC m=+331.966351953" watchObservedRunningTime="2026-02-25 11:23:37.929335526 +0000 UTC m=+331.970067853" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.253184 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgfww"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.254058 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgfww" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="registry-server" containerID="cri-o://bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e" gracePeriod=30 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.289044 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9bdm4"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.289719 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9bdm4" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="registry-server" containerID="cri-o://d23d7c3f091e304f99032ea79230ff5b393277aa44da3fc6a2971a4980508fc4" gracePeriod=30 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.304495 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hzbjf"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.304789 5005 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" containerID="cri-o://400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f" gracePeriod=30 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.314331 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw6ng"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.314609 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vw6ng" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="registry-server" containerID="cri-o://a4eccfabd239252ec9261e9af13bee8677b2c7e3f6199893c41e8932206d9c2a" gracePeriod=30 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.332325 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhhkb"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.332618 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zhhkb" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="registry-server" containerID="cri-o://4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be" gracePeriod=30 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.336482 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9d6zm"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.337493 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.339201 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9d6zm"] Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.457089 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5733f94a-093f-4eda-ad84-a2d3cf989483-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.457187 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487j4\" (UniqueName: \"kubernetes.io/projected/5733f94a-093f-4eda-ad84-a2d3cf989483-kube-api-access-487j4\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.457233 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5733f94a-093f-4eda-ad84-a2d3cf989483-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.558215 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-487j4\" (UniqueName: \"kubernetes.io/projected/5733f94a-093f-4eda-ad84-a2d3cf989483-kube-api-access-487j4\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: 
\"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.558288 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5733f94a-093f-4eda-ad84-a2d3cf989483-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.558338 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5733f94a-093f-4eda-ad84-a2d3cf989483-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.559578 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5733f94a-093f-4eda-ad84-a2d3cf989483-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.565085 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5733f94a-093f-4eda-ad84-a2d3cf989483-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.574074 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-487j4\" (UniqueName: \"kubernetes.io/projected/5733f94a-093f-4eda-ad84-a2d3cf989483-kube-api-access-487j4\") pod \"marketplace-operator-79b997595-9d6zm\" (UID: \"5733f94a-093f-4eda-ad84-a2d3cf989483\") " pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.652580 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.674065 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.760231 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9f2q\" (UniqueName: \"kubernetes.io/projected/a80639a8-456e-4f88-a949-ce0c6fa8284c-kube-api-access-f9f2q\") pod \"a80639a8-456e-4f88-a949-ce0c6fa8284c\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.760309 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-utilities\") pod \"a80639a8-456e-4f88-a949-ce0c6fa8284c\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.760361 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-catalog-content\") pod \"a80639a8-456e-4f88-a949-ce0c6fa8284c\" (UID: \"a80639a8-456e-4f88-a949-ce0c6fa8284c\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.761452 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-utilities" (OuterVolumeSpecName: 
"utilities") pod "a80639a8-456e-4f88-a949-ce0c6fa8284c" (UID: "a80639a8-456e-4f88-a949-ce0c6fa8284c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.768112 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.781259 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.845978 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a80639a8-456e-4f88-a949-ce0c6fa8284c" (UID: "a80639a8-456e-4f88-a949-ce0c6fa8284c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.861264 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-utilities\") pod \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.861317 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62sg8\" (UniqueName: \"kubernetes.io/projected/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-kube-api-access-62sg8\") pod \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.861349 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-operator-metrics\") pod \"9a85e597-ce82-4b41-b00e-e142bbd38849\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.861597 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-catalog-content\") pod \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\" (UID: \"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.861705 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-trusted-ca\") pod \"9a85e597-ce82-4b41-b00e-e142bbd38849\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.861745 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skbpw\" (UniqueName: \"kubernetes.io/projected/9a85e597-ce82-4b41-b00e-e142bbd38849-kube-api-access-skbpw\") pod \"9a85e597-ce82-4b41-b00e-e142bbd38849\" (UID: \"9a85e597-ce82-4b41-b00e-e142bbd38849\") " Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.862130 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.862148 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a80639a8-456e-4f88-a949-ce0c6fa8284c-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.862405 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9a85e597-ce82-4b41-b00e-e142bbd38849" (UID: "9a85e597-ce82-4b41-b00e-e142bbd38849"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.870200 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-utilities" (OuterVolumeSpecName: "utilities") pod "8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" (UID: "8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.933779 5005 generic.go:334] "Generic (PLEG): container finished" podID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerID="d23d7c3f091e304f99032ea79230ff5b393277aa44da3fc6a2971a4980508fc4" exitCode=0 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.933844 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerDied","Data":"d23d7c3f091e304f99032ea79230ff5b393277aa44da3fc6a2971a4980508fc4"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.935848 5005 generic.go:334] "Generic (PLEG): container finished" podID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerID="4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be" exitCode=0 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.935891 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhhkb" event={"ID":"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1","Type":"ContainerDied","Data":"4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.935908 5005 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhhkb" event={"ID":"8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1","Type":"ContainerDied","Data":"75c47509906c638e9cf129396e6a07549fd7153d846a7bc76e58f18194204a1e"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.935923 5005 scope.go:117] "RemoveContainer" containerID="4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.936028 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhhkb" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.938358 5005 generic.go:334] "Generic (PLEG): container finished" podID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerID="400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f" exitCode=0 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.938420 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" event={"ID":"9a85e597-ce82-4b41-b00e-e142bbd38849","Type":"ContainerDied","Data":"400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.938440 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" event={"ID":"9a85e597-ce82-4b41-b00e-e142bbd38849","Type":"ContainerDied","Data":"198c82bb6b80d210e5c253bf7f76bd313de0feeb1efa9de3d0a45ebeb2e1a9c3"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.938488 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hzbjf" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.940590 5005 generic.go:334] "Generic (PLEG): container finished" podID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerID="a4eccfabd239252ec9261e9af13bee8677b2c7e3f6199893c41e8932206d9c2a" exitCode=0 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.940646 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw6ng" event={"ID":"4e5d39a0-d977-4973-a2a3-55699e86de91","Type":"ContainerDied","Data":"a4eccfabd239252ec9261e9af13bee8677b2c7e3f6199893c41e8932206d9c2a"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.942835 5005 generic.go:334] "Generic (PLEG): container finished" podID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerID="bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e" exitCode=0 Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.942864 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfww" event={"ID":"a80639a8-456e-4f88-a949-ce0c6fa8284c","Type":"ContainerDied","Data":"bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.942881 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfww" event={"ID":"a80639a8-456e-4f88-a949-ce0c6fa8284c","Type":"ContainerDied","Data":"790070acf20bd9866f8d52f6d7272a7a6ea2ee3b3e3b3bdc89f35537df5b07fb"} Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.942934 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgfww" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.962893 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.962917 5005 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:40 crc kubenswrapper[5005]: I0225 11:23:40.996027 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" (UID: "8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.064573 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.080184 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80639a8-456e-4f88-a949-ce0c6fa8284c-kube-api-access-f9f2q" (OuterVolumeSpecName: "kube-api-access-f9f2q") pod "a80639a8-456e-4f88-a949-ce0c6fa8284c" (UID: "a80639a8-456e-4f88-a949-ce0c6fa8284c"). InnerVolumeSpecName "kube-api-access-f9f2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.081616 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a85e597-ce82-4b41-b00e-e142bbd38849-kube-api-access-skbpw" (OuterVolumeSpecName: "kube-api-access-skbpw") pod "9a85e597-ce82-4b41-b00e-e142bbd38849" (UID: "9a85e597-ce82-4b41-b00e-e142bbd38849"). InnerVolumeSpecName "kube-api-access-skbpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.083245 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-kube-api-access-62sg8" (OuterVolumeSpecName: "kube-api-access-62sg8") pod "8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" (UID: "8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1"). InnerVolumeSpecName "kube-api-access-62sg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.084413 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9a85e597-ce82-4b41-b00e-e142bbd38849" (UID: "9a85e597-ce82-4b41-b00e-e142bbd38849"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.100862 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.108175 5005 scope.go:117] "RemoveContainer" containerID="b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.117484 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9d6zm"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.132091 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:23:41 crc kubenswrapper[5005]: W0225 11:23:41.146426 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5733f94a_093f_4eda_ad84_a2d3cf989483.slice/crio-6d11baca94695028e5f1f5a4a8f039fc77e0085456085310e19c671e630f2a6a WatchSource:0}: Error finding container 6d11baca94695028e5f1f5a4a8f039fc77e0085456085310e19c671e630f2a6a: Status 404 returned error can't find the container with id 6d11baca94695028e5f1f5a4a8f039fc77e0085456085310e19c671e630f2a6a Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.157995 5005 scope.go:117] "RemoveContainer" containerID="911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.166789 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62sg8\" (UniqueName: \"kubernetes.io/projected/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1-kube-api-access-62sg8\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.166822 5005 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a85e597-ce82-4b41-b00e-e142bbd38849-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.166837 
5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9f2q\" (UniqueName: \"kubernetes.io/projected/a80639a8-456e-4f88-a949-ce0c6fa8284c-kube-api-access-f9f2q\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.166866 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skbpw\" (UniqueName: \"kubernetes.io/projected/9a85e597-ce82-4b41-b00e-e142bbd38849-kube-api-access-skbpw\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.208711 5005 scope.go:117] "RemoveContainer" containerID="4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.209210 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be\": container with ID starting with 4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be not found: ID does not exist" containerID="4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.209250 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be"} err="failed to get container status \"4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be\": rpc error: code = NotFound desc = could not find container \"4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be\": container with ID starting with 4394c47dfe16f1049ffddfaff3a81628f8fa67692f2df0ace0c34ae1b3ab19be not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.209283 5005 scope.go:117] "RemoveContainer" containerID="b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.211652 5005 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df\": container with ID starting with b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df not found: ID does not exist" containerID="b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.211687 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df"} err="failed to get container status \"b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df\": rpc error: code = NotFound desc = could not find container \"b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df\": container with ID starting with b31f210cf23ff3eee8b69b2d1fa6fab1634f3a5d31f041bf4dcbc32aa6be45df not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.211705 5005 scope.go:117] "RemoveContainer" containerID="911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.212099 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5\": container with ID starting with 911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5 not found: ID does not exist" containerID="911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.212126 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5"} err="failed to get container status \"911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5\": rpc error: code = NotFound desc = could 
not find container \"911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5\": container with ID starting with 911e1eda948b505f0e18480e3b57cbdfb8347fc2288f8669c556396ee8899df5 not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.212141 5005 scope.go:117] "RemoveContainer" containerID="400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.229954 5005 scope.go:117] "RemoveContainer" containerID="070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.259977 5005 scope.go:117] "RemoveContainer" containerID="400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.261044 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f\": container with ID starting with 400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f not found: ID does not exist" containerID="400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.261129 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f"} err="failed to get container status \"400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f\": rpc error: code = NotFound desc = could not find container \"400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f\": container with ID starting with 400978855a1b9bd9d5930e674499cf5737d8171b793312438755fb9cf17c515f not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.261166 5005 scope.go:117] "RemoveContainer" containerID="070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95" Feb 25 
11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.261742 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95\": container with ID starting with 070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95 not found: ID does not exist" containerID="070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.261791 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95"} err="failed to get container status \"070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95\": rpc error: code = NotFound desc = could not find container \"070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95\": container with ID starting with 070c821be0e163c8231a2e69ec7c9b2acb5f74f3016d587ef1266e120cc1dd95 not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.261816 5005 scope.go:117] "RemoveContainer" containerID="bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.267496 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-catalog-content\") pod \"4e5d39a0-d977-4973-a2a3-55699e86de91\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.267534 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vhjn\" (UniqueName: \"kubernetes.io/projected/4e5d39a0-d977-4973-a2a3-55699e86de91-kube-api-access-6vhjn\") pod \"4e5d39a0-d977-4973-a2a3-55699e86de91\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " Feb 25 11:23:41 crc kubenswrapper[5005]: 
I0225 11:23:41.267592 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flpk8\" (UniqueName: \"kubernetes.io/projected/f0fc9766-1d31-4d5a-8234-47d9e32f59be-kube-api-access-flpk8\") pod \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.267619 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-utilities\") pod \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.267646 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-utilities\") pod \"4e5d39a0-d977-4973-a2a3-55699e86de91\" (UID: \"4e5d39a0-d977-4973-a2a3-55699e86de91\") " Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.269469 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-catalog-content\") pod \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\" (UID: \"f0fc9766-1d31-4d5a-8234-47d9e32f59be\") " Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.269518 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-utilities" (OuterVolumeSpecName: "utilities") pod "f0fc9766-1d31-4d5a-8234-47d9e32f59be" (UID: "f0fc9766-1d31-4d5a-8234-47d9e32f59be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.273976 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e5d39a0-d977-4973-a2a3-55699e86de91-kube-api-access-6vhjn" (OuterVolumeSpecName: "kube-api-access-6vhjn") pod "4e5d39a0-d977-4973-a2a3-55699e86de91" (UID: "4e5d39a0-d977-4973-a2a3-55699e86de91"). InnerVolumeSpecName "kube-api-access-6vhjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.282146 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-utilities" (OuterVolumeSpecName: "utilities") pod "4e5d39a0-d977-4973-a2a3-55699e86de91" (UID: "4e5d39a0-d977-4973-a2a3-55699e86de91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.284221 5005 scope.go:117] "RemoveContainer" containerID="8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.296407 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhhkb"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.303612 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhhkb"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.308692 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fc9766-1d31-4d5a-8234-47d9e32f59be-kube-api-access-flpk8" (OuterVolumeSpecName: "kube-api-access-flpk8") pod "f0fc9766-1d31-4d5a-8234-47d9e32f59be" (UID: "f0fc9766-1d31-4d5a-8234-47d9e32f59be"). InnerVolumeSpecName "kube-api-access-flpk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.309061 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgfww"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.316291 5005 scope.go:117] "RemoveContainer" containerID="eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.320227 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgfww"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.327272 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hzbjf"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.330040 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e5d39a0-d977-4973-a2a3-55699e86de91" (UID: "4e5d39a0-d977-4973-a2a3-55699e86de91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.335566 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hzbjf"] Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.354430 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0fc9766-1d31-4d5a-8234-47d9e32f59be" (UID: "f0fc9766-1d31-4d5a-8234-47d9e32f59be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.355294 5005 scope.go:117] "RemoveContainer" containerID="bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.356616 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e\": container with ID starting with bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e not found: ID does not exist" containerID="bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.356645 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e"} err="failed to get container status \"bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e\": rpc error: code = NotFound desc = could not find container \"bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e\": container with ID starting with bd007eca1dc4ac6bf1fd5a1abd9f85e5d4173c99a000d6732c23d456881dc12e not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.356666 5005 scope.go:117] "RemoveContainer" containerID="8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.356974 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683\": container with ID starting with 8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683 not found: ID does not exist" containerID="8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.357039 
5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683"} err="failed to get container status \"8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683\": rpc error: code = NotFound desc = could not find container \"8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683\": container with ID starting with 8458e476cd09de215d8cd5f8701cadf4021748efd1809b8ccb09f21d60243683 not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.357055 5005 scope.go:117] "RemoveContainer" containerID="eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955" Feb 25 11:23:41 crc kubenswrapper[5005]: E0225 11:23:41.357430 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955\": container with ID starting with eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955 not found: ID does not exist" containerID="eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.357454 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955"} err="failed to get container status \"eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955\": rpc error: code = NotFound desc = could not find container \"eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955\": container with ID starting with eeff8dcc34975225b0343cfa74746cf6d4e0aad2afa4d76fb83897c203d58955 not found: ID does not exist" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.381731 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flpk8\" (UniqueName: 
\"kubernetes.io/projected/f0fc9766-1d31-4d5a-8234-47d9e32f59be-kube-api-access-flpk8\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.381769 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.381783 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.381794 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fc9766-1d31-4d5a-8234-47d9e32f59be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.381806 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5d39a0-d977-4973-a2a3-55699e86de91-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.381817 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vhjn\" (UniqueName: \"kubernetes.io/projected/4e5d39a0-d977-4973-a2a3-55699e86de91-kube-api-access-6vhjn\") on node \"crc\" DevicePath \"\"" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.970475 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" event={"ID":"5733f94a-093f-4eda-ad84-a2d3cf989483","Type":"ContainerStarted","Data":"117849c2cd1974e24fd0017663121082d77e794b094147e897813f9df52e3977"} Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.970565 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" 
event={"ID":"5733f94a-093f-4eda-ad84-a2d3cf989483","Type":"ContainerStarted","Data":"6d11baca94695028e5f1f5a4a8f039fc77e0085456085310e19c671e630f2a6a"} Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.970599 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.980353 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.984914 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw6ng" event={"ID":"4e5d39a0-d977-4973-a2a3-55699e86de91","Type":"ContainerDied","Data":"523f7e6a2ac9e9e8ec20808b144ff6668206e3f5b4bbf0539e1031258d5cbe73"} Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.985017 5005 scope.go:117] "RemoveContainer" containerID="a4eccfabd239252ec9261e9af13bee8677b2c7e3f6199893c41e8932206d9c2a" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.984926 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw6ng" Feb 25 11:23:41 crc kubenswrapper[5005]: I0225 11:23:41.992075 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9d6zm" podStartSLOduration=1.992058627 podStartE2EDuration="1.992058627s" podCreationTimestamp="2026-02-25 11:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:23:41.99029202 +0000 UTC m=+336.031024377" watchObservedRunningTime="2026-02-25 11:23:41.992058627 +0000 UTC m=+336.032790954" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.004021 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bdm4" event={"ID":"f0fc9766-1d31-4d5a-8234-47d9e32f59be","Type":"ContainerDied","Data":"5164004bd7edea935f82d4af708fec589eb3e2afbed0b30967d90c94bf831893"} Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.004144 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9bdm4" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.025469 5005 scope.go:117] "RemoveContainer" containerID="efa4fab6845a904d2453745b60f92daf2c19142cae8703fc8c7ca40d731f6e2c" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.046280 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw6ng"] Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.080220 5005 scope.go:117] "RemoveContainer" containerID="f0d343acbb4beaffba9cb2648c4430d63fe8db62ff5579889441014ff4c5d179" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.080338 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw6ng"] Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.086257 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9bdm4"] Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.093247 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9bdm4"] Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.099779 5005 scope.go:117] "RemoveContainer" containerID="d23d7c3f091e304f99032ea79230ff5b393277aa44da3fc6a2971a4980508fc4" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.117156 5005 scope.go:117] "RemoveContainer" containerID="8c353db74ccda82b617cdb4349427d99643bcbed3950421bf641dfb706248d63" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.132823 5005 scope.go:117] "RemoveContainer" containerID="d7e7aada8b128f64aa57ad37afd3c8f59d02fcf55f3948ced7964bd6a990f67c" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.668116 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8wzmf"] Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.669442 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.669540 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.669604 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.669675 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.669738 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.669795 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.669875 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.669933 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.669987 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.670051 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.670122 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.670203 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.673849 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.673958 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674025 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674081 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674147 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674209 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="extract-content" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674269 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674326 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674404 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674469 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674695 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674758 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674817 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674874 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: E0225 11:23:42.674932 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.674986 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="extract-utilities" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.675239 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.675315 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.675394 5005 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.675453 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" containerName="marketplace-operator" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.675522 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.675587 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" containerName="registry-server" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.676398 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.678983 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.681502 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wzmf"] Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.696191 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5d39a0-d977-4973-a2a3-55699e86de91" path="/var/lib/kubelet/pods/4e5d39a0-d977-4973-a2a3-55699e86de91/volumes" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.697016 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1" path="/var/lib/kubelet/pods/8e6f04db-3c08-49ba-af16-7ebbc0cfe6f1/volumes" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.697903 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a85e597-ce82-4b41-b00e-e142bbd38849" 
path="/var/lib/kubelet/pods/9a85e597-ce82-4b41-b00e-e142bbd38849/volumes" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.699085 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80639a8-456e-4f88-a949-ce0c6fa8284c" path="/var/lib/kubelet/pods/a80639a8-456e-4f88-a949-ce0c6fa8284c/volumes" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.699943 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fc9766-1d31-4d5a-8234-47d9e32f59be" path="/var/lib/kubelet/pods/f0fc9766-1d31-4d5a-8234-47d9e32f59be/volumes" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.811906 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-utilities\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.811955 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhq7s\" (UniqueName: \"kubernetes.io/projected/d339f835-0982-43e9-9d42-3a6893c3905e-kube-api-access-mhq7s\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.812007 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-catalog-content\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.868094 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dt5p"] Feb 25 11:23:42 
crc kubenswrapper[5005]: I0225 11:23:42.869243 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.872538 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.879622 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dt5p"] Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.913646 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-utilities\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.913734 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhq7s\" (UniqueName: \"kubernetes.io/projected/d339f835-0982-43e9-9d42-3a6893c3905e-kube-api-access-mhq7s\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.913796 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-catalog-content\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.914444 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-catalog-content\") pod 
\"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.914517 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-utilities\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:42 crc kubenswrapper[5005]: I0225 11:23:42.930299 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhq7s\" (UniqueName: \"kubernetes.io/projected/d339f835-0982-43e9-9d42-3a6893c3905e-kube-api-access-mhq7s\") pod \"redhat-marketplace-8wzmf\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.014835 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57238c9b-440f-4958-9380-41b2fed1033e-catalog-content\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.014903 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hll\" (UniqueName: \"kubernetes.io/projected/57238c9b-440f-4958-9380-41b2fed1033e-kube-api-access-w6hll\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.014963 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57238c9b-440f-4958-9380-41b2fed1033e-utilities\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.038661 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.116668 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57238c9b-440f-4958-9380-41b2fed1033e-catalog-content\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.116719 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hll\" (UniqueName: \"kubernetes.io/projected/57238c9b-440f-4958-9380-41b2fed1033e-kube-api-access-w6hll\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.116757 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57238c9b-440f-4958-9380-41b2fed1033e-utilities\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.117549 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57238c9b-440f-4958-9380-41b2fed1033e-utilities\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 
11:23:43.117994 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57238c9b-440f-4958-9380-41b2fed1033e-catalog-content\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.144792 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hll\" (UniqueName: \"kubernetes.io/projected/57238c9b-440f-4958-9380-41b2fed1033e-kube-api-access-w6hll\") pod \"redhat-operators-4dt5p\" (UID: \"57238c9b-440f-4958-9380-41b2fed1033e\") " pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.195924 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.232908 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wzmf"] Feb 25 11:23:43 crc kubenswrapper[5005]: E0225 11:23:43.464888 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd339f835_0982_43e9_9d42_3a6893c3905e.slice/crio-662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd339f835_0982_43e9_9d42_3a6893c3905e.slice/crio-conmon-662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143.scope\": RecentStats: unable to find data in memory cache]" Feb 25 11:23:43 crc kubenswrapper[5005]: I0225 11:23:43.629229 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dt5p"] Feb 25 11:23:44 crc kubenswrapper[5005]: I0225 11:23:44.018757 5005 
generic.go:334] "Generic (PLEG): container finished" podID="57238c9b-440f-4958-9380-41b2fed1033e" containerID="63d4a7e5022f97688e98424180aed7d7122a708adc91361b84e3520a7b383b9d" exitCode=0 Feb 25 11:23:44 crc kubenswrapper[5005]: I0225 11:23:44.018853 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dt5p" event={"ID":"57238c9b-440f-4958-9380-41b2fed1033e","Type":"ContainerDied","Data":"63d4a7e5022f97688e98424180aed7d7122a708adc91361b84e3520a7b383b9d"} Feb 25 11:23:44 crc kubenswrapper[5005]: I0225 11:23:44.018895 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dt5p" event={"ID":"57238c9b-440f-4958-9380-41b2fed1033e","Type":"ContainerStarted","Data":"4ef39816fa3af35a0808f40c4539e6af5f723e93e26f19ce6207177170e07e15"} Feb 25 11:23:44 crc kubenswrapper[5005]: I0225 11:23:44.023413 5005 generic.go:334] "Generic (PLEG): container finished" podID="d339f835-0982-43e9-9d42-3a6893c3905e" containerID="662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143" exitCode=0 Feb 25 11:23:44 crc kubenswrapper[5005]: I0225 11:23:44.023507 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wzmf" event={"ID":"d339f835-0982-43e9-9d42-3a6893c3905e","Type":"ContainerDied","Data":"662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143"} Feb 25 11:23:44 crc kubenswrapper[5005]: I0225 11:23:44.023557 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wzmf" event={"ID":"d339f835-0982-43e9-9d42-3a6893c3905e","Type":"ContainerStarted","Data":"00fcc0cd30bb8a361b362ddfd7f8d3578f16cc1f6fde7203a4155a9e362a3c0c"} Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.071342 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9vlc9"] Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.073787 5005 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.075659 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.090885 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vlc9"] Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.245662 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-catalog-content\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.245714 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsfn\" (UniqueName: \"kubernetes.io/projected/c2595820-1d56-4102-82a6-cbc52b963ab4-kube-api-access-rcsfn\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.245991 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-utilities\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.264450 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkjrs"] Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.265423 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.268687 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.277415 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkjrs"] Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.347208 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-utilities\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.347269 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-catalog-content\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.347286 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsfn\" (UniqueName: \"kubernetes.io/projected/c2595820-1d56-4102-82a6-cbc52b963ab4-kube-api-access-rcsfn\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.347773 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-utilities\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " 
pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.348023 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-catalog-content\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.365455 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsfn\" (UniqueName: \"kubernetes.io/projected/c2595820-1d56-4102-82a6-cbc52b963ab4-kube-api-access-rcsfn\") pod \"certified-operators-9vlc9\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.391968 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.448996 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-catalog-content\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.449421 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-utilities\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.449679 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnhq\" (UniqueName: \"kubernetes.io/projected/81b20372-b556-4def-bd2f-1452f40fe338-kube-api-access-6bnhq\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.550785 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-catalog-content\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.550852 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-utilities\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.550923 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bnhq\" (UniqueName: \"kubernetes.io/projected/81b20372-b556-4def-bd2f-1452f40fe338-kube-api-access-6bnhq\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.551411 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-utilities\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.551443 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-catalog-content\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.573285 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bnhq\" (UniqueName: \"kubernetes.io/projected/81b20372-b556-4def-bd2f-1452f40fe338-kube-api-access-6bnhq\") pod \"community-operators-rkjrs\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:45 crc kubenswrapper[5005]: I0225 11:23:45.606935 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:46 crc kubenswrapper[5005]: I0225 11:23:45.764487 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vlc9"] Feb 25 11:23:46 crc kubenswrapper[5005]: W0225 11:23:45.773174 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2595820_1d56_4102_82a6_cbc52b963ab4.slice/crio-6c584044f0912f72da9e8bc6a2c5d0341f25c017132d1a0b6b973e4c2c4e172c WatchSource:0}: Error finding container 6c584044f0912f72da9e8bc6a2c5d0341f25c017132d1a0b6b973e4c2c4e172c: Status 404 returned error can't find the container with id 6c584044f0912f72da9e8bc6a2c5d0341f25c017132d1a0b6b973e4c2c4e172c Feb 25 11:23:46 crc kubenswrapper[5005]: I0225 11:23:46.035137 5005 generic.go:334] "Generic (PLEG): container finished" podID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerID="8be50e6414f6669a5c6958aae1d7b83bf431022dc41a58565bc0f3ac2f646d37" exitCode=0 Feb 25 11:23:46 crc kubenswrapper[5005]: I0225 11:23:46.035178 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9vlc9" event={"ID":"c2595820-1d56-4102-82a6-cbc52b963ab4","Type":"ContainerDied","Data":"8be50e6414f6669a5c6958aae1d7b83bf431022dc41a58565bc0f3ac2f646d37"} Feb 25 11:23:46 crc kubenswrapper[5005]: I0225 11:23:46.035212 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vlc9" event={"ID":"c2595820-1d56-4102-82a6-cbc52b963ab4","Type":"ContainerStarted","Data":"6c584044f0912f72da9e8bc6a2c5d0341f25c017132d1a0b6b973e4c2c4e172c"} Feb 25 11:23:46 crc kubenswrapper[5005]: I0225 11:23:46.428163 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkjrs"] Feb 25 11:23:46 crc kubenswrapper[5005]: W0225 11:23:46.435668 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81b20372_b556_4def_bd2f_1452f40fe338.slice/crio-5453e1adbf2da47db1d03e2041029930e5ff56c60d6a403fd5a828c77ed44411 WatchSource:0}: Error finding container 5453e1adbf2da47db1d03e2041029930e5ff56c60d6a403fd5a828c77ed44411: Status 404 returned error can't find the container with id 5453e1adbf2da47db1d03e2041029930e5ff56c60d6a403fd5a828c77ed44411 Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.048397 5005 generic.go:334] "Generic (PLEG): container finished" podID="81b20372-b556-4def-bd2f-1452f40fe338" containerID="2d9183f6def143b53b10ad1b7a5a6a5102a6284dc01a1989c56d8f3c7166f962" exitCode=0 Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.048457 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkjrs" event={"ID":"81b20372-b556-4def-bd2f-1452f40fe338","Type":"ContainerDied","Data":"2d9183f6def143b53b10ad1b7a5a6a5102a6284dc01a1989c56d8f3c7166f962"} Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.048483 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkjrs" 
event={"ID":"81b20372-b556-4def-bd2f-1452f40fe338","Type":"ContainerStarted","Data":"5453e1adbf2da47db1d03e2041029930e5ff56c60d6a403fd5a828c77ed44411"} Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.051620 5005 generic.go:334] "Generic (PLEG): container finished" podID="57238c9b-440f-4958-9380-41b2fed1033e" containerID="e58b3a2a58d3ae724af4cb6af2c8aa7ae7cec56faa3b680d6e8a49882aad2a7d" exitCode=0 Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.051666 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dt5p" event={"ID":"57238c9b-440f-4958-9380-41b2fed1033e","Type":"ContainerDied","Data":"e58b3a2a58d3ae724af4cb6af2c8aa7ae7cec56faa3b680d6e8a49882aad2a7d"} Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.056999 5005 generic.go:334] "Generic (PLEG): container finished" podID="d339f835-0982-43e9-9d42-3a6893c3905e" containerID="be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c" exitCode=0 Feb 25 11:23:47 crc kubenswrapper[5005]: I0225 11:23:47.057023 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wzmf" event={"ID":"d339f835-0982-43e9-9d42-3a6893c3905e","Type":"ContainerDied","Data":"be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c"} Feb 25 11:23:48 crc kubenswrapper[5005]: I0225 11:23:48.064877 5005 generic.go:334] "Generic (PLEG): container finished" podID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerID="9db3e5c8ec310f12f87c60bc8a9a5dc20578bff8ae5b18cef0855e76e75c852e" exitCode=0 Feb 25 11:23:48 crc kubenswrapper[5005]: I0225 11:23:48.064960 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vlc9" event={"ID":"c2595820-1d56-4102-82a6-cbc52b963ab4","Type":"ContainerDied","Data":"9db3e5c8ec310f12f87c60bc8a9a5dc20578bff8ae5b18cef0855e76e75c852e"} Feb 25 11:23:49 crc kubenswrapper[5005]: I0225 11:23:49.089127 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4dt5p" event={"ID":"57238c9b-440f-4958-9380-41b2fed1033e","Type":"ContainerStarted","Data":"574ba6b23857e830369aaa7e1a9054e8a3dad6527700ae5e9252e9c09e957fc5"} Feb 25 11:23:49 crc kubenswrapper[5005]: I0225 11:23:49.105410 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dt5p" podStartSLOduration=2.336553622 podStartE2EDuration="7.105337825s" podCreationTimestamp="2026-02-25 11:23:42 +0000 UTC" firstStartedPulling="2026-02-25 11:23:44.021049245 +0000 UTC m=+338.061781612" lastFinishedPulling="2026-02-25 11:23:48.789833488 +0000 UTC m=+342.830565815" observedRunningTime="2026-02-25 11:23:49.102210102 +0000 UTC m=+343.142942429" watchObservedRunningTime="2026-02-25 11:23:49.105337825 +0000 UTC m=+343.146070152" Feb 25 11:23:50 crc kubenswrapper[5005]: I0225 11:23:50.096682 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vlc9" event={"ID":"c2595820-1d56-4102-82a6-cbc52b963ab4","Type":"ContainerStarted","Data":"7d7f36dc5136f430e3a8dd50a9ed4888162d7967450fa527ee5183eeaec79fd2"} Feb 25 11:23:50 crc kubenswrapper[5005]: I0225 11:23:50.098250 5005 generic.go:334] "Generic (PLEG): container finished" podID="81b20372-b556-4def-bd2f-1452f40fe338" containerID="acc125cc87dae88bd1bc3b73b4aea651a920eb59858e381dd7106d1379e6957c" exitCode=0 Feb 25 11:23:50 crc kubenswrapper[5005]: I0225 11:23:50.098501 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkjrs" event={"ID":"81b20372-b556-4def-bd2f-1452f40fe338","Type":"ContainerDied","Data":"acc125cc87dae88bd1bc3b73b4aea651a920eb59858e381dd7106d1379e6957c"} Feb 25 11:23:50 crc kubenswrapper[5005]: I0225 11:23:50.124101 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9vlc9" podStartSLOduration=1.6258539650000001 podStartE2EDuration="5.124077472s" 
podCreationTimestamp="2026-02-25 11:23:45 +0000 UTC" firstStartedPulling="2026-02-25 11:23:46.03644534 +0000 UTC m=+340.077177657" lastFinishedPulling="2026-02-25 11:23:49.534668837 +0000 UTC m=+343.575401164" observedRunningTime="2026-02-25 11:23:50.120981991 +0000 UTC m=+344.161714358" watchObservedRunningTime="2026-02-25 11:23:50.124077472 +0000 UTC m=+344.164809839" Feb 25 11:23:51 crc kubenswrapper[5005]: I0225 11:23:51.104877 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wzmf" event={"ID":"d339f835-0982-43e9-9d42-3a6893c3905e","Type":"ContainerStarted","Data":"46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0"} Feb 25 11:23:51 crc kubenswrapper[5005]: I0225 11:23:51.106934 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkjrs" event={"ID":"81b20372-b556-4def-bd2f-1452f40fe338","Type":"ContainerStarted","Data":"c02166e9269209b5b24ada3c9ff08678af3bd1c1db6782eeae097f1efc3bebf9"} Feb 25 11:23:51 crc kubenswrapper[5005]: I0225 11:23:51.126564 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8wzmf" podStartSLOduration=3.048689692 podStartE2EDuration="9.126548258s" podCreationTimestamp="2026-02-25 11:23:42 +0000 UTC" firstStartedPulling="2026-02-25 11:23:44.025737758 +0000 UTC m=+338.066470085" lastFinishedPulling="2026-02-25 11:23:50.103596324 +0000 UTC m=+344.144328651" observedRunningTime="2026-02-25 11:23:51.124781281 +0000 UTC m=+345.165513608" watchObservedRunningTime="2026-02-25 11:23:51.126548258 +0000 UTC m=+345.167280585" Feb 25 11:23:51 crc kubenswrapper[5005]: I0225 11:23:51.152432 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkjrs" podStartSLOduration=2.670069234 podStartE2EDuration="6.152413382s" podCreationTimestamp="2026-02-25 11:23:45 +0000 UTC" firstStartedPulling="2026-02-25 
11:23:47.052098606 +0000 UTC m=+341.092830933" lastFinishedPulling="2026-02-25 11:23:50.534442754 +0000 UTC m=+344.575175081" observedRunningTime="2026-02-25 11:23:51.150157979 +0000 UTC m=+345.190890306" watchObservedRunningTime="2026-02-25 11:23:51.152413382 +0000 UTC m=+345.193145709" Feb 25 11:23:53 crc kubenswrapper[5005]: I0225 11:23:53.039263 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:53 crc kubenswrapper[5005]: I0225 11:23:53.039706 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:53 crc kubenswrapper[5005]: I0225 11:23:53.093944 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:23:53 crc kubenswrapper[5005]: I0225 11:23:53.196458 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:53 crc kubenswrapper[5005]: I0225 11:23:53.196502 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:23:54 crc kubenswrapper[5005]: I0225 11:23:54.243679 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dt5p" podUID="57238c9b-440f-4958-9380-41b2fed1033e" containerName="registry-server" probeResult="failure" output=< Feb 25 11:23:54 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:23:54 crc kubenswrapper[5005]: > Feb 25 11:23:55 crc kubenswrapper[5005]: I0225 11:23:55.392245 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:55 crc kubenswrapper[5005]: I0225 11:23:55.392682 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:55 crc kubenswrapper[5005]: I0225 11:23:55.444311 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:55 crc kubenswrapper[5005]: I0225 11:23:55.607258 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:55 crc kubenswrapper[5005]: I0225 11:23:55.607321 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:55 crc kubenswrapper[5005]: I0225 11:23:55.668116 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:56 crc kubenswrapper[5005]: I0225 11:23:56.185294 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 11:23:56 crc kubenswrapper[5005]: I0225 11:23:56.205355 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 11:23:57 crc kubenswrapper[5005]: I0225 11:23:57.007158 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jkj4r" Feb 25 11:23:57 crc kubenswrapper[5005]: I0225 11:23:57.060720 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xxm8g"] Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.140480 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533644-tplgw"] Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.141621 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.143824 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.143922 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.144006 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.147925 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533644-tplgw"] Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.253642 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwc8r\" (UniqueName: \"kubernetes.io/projected/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb-kube-api-access-wwc8r\") pod \"auto-csr-approver-29533644-tplgw\" (UID: \"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb\") " pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.355309 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwc8r\" (UniqueName: \"kubernetes.io/projected/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb-kube-api-access-wwc8r\") pod \"auto-csr-approver-29533644-tplgw\" (UID: \"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb\") " pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.379902 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwc8r\" (UniqueName: \"kubernetes.io/projected/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb-kube-api-access-wwc8r\") pod \"auto-csr-approver-29533644-tplgw\" (UID: \"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb\") " 
pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.457639 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:00 crc kubenswrapper[5005]: I0225 11:24:00.660295 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533644-tplgw"] Feb 25 11:24:01 crc kubenswrapper[5005]: I0225 11:24:01.171963 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533644-tplgw" event={"ID":"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb","Type":"ContainerStarted","Data":"e2b8ef12e5bd2cbc56fe624741c6dc3263d7fd388ddac8e730ea06686f6e2c1c"} Feb 25 11:24:02 crc kubenswrapper[5005]: I0225 11:24:02.179970 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533644-tplgw" event={"ID":"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb","Type":"ContainerStarted","Data":"2e7fea44d02b32208a7cd6f694dcd414aba496a9311ea694238a071f385562da"} Feb 25 11:24:02 crc kubenswrapper[5005]: I0225 11:24:02.196651 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533644-tplgw" podStartSLOduration=1.059930302 podStartE2EDuration="2.19662987s" podCreationTimestamp="2026-02-25 11:24:00 +0000 UTC" firstStartedPulling="2026-02-25 11:24:00.667760103 +0000 UTC m=+354.708492430" lastFinishedPulling="2026-02-25 11:24:01.804459671 +0000 UTC m=+355.845191998" observedRunningTime="2026-02-25 11:24:02.192357951 +0000 UTC m=+356.233090288" watchObservedRunningTime="2026-02-25 11:24:02.19662987 +0000 UTC m=+356.237362197" Feb 25 11:24:03 crc kubenswrapper[5005]: I0225 11:24:03.102839 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 11:24:03 crc kubenswrapper[5005]: I0225 11:24:03.187752 5005 generic.go:334] "Generic (PLEG): 
container finished" podID="f84fbc8f-fd0e-4c51-8667-f0e672a3aebb" containerID="2e7fea44d02b32208a7cd6f694dcd414aba496a9311ea694238a071f385562da" exitCode=0 Feb 25 11:24:03 crc kubenswrapper[5005]: I0225 11:24:03.187820 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533644-tplgw" event={"ID":"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb","Type":"ContainerDied","Data":"2e7fea44d02b32208a7cd6f694dcd414aba496a9311ea694238a071f385562da"} Feb 25 11:24:03 crc kubenswrapper[5005]: I0225 11:24:03.319150 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:24:03 crc kubenswrapper[5005]: I0225 11:24:03.373584 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dt5p" Feb 25 11:24:04 crc kubenswrapper[5005]: I0225 11:24:04.464831 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:04 crc kubenswrapper[5005]: I0225 11:24:04.611317 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwc8r\" (UniqueName: \"kubernetes.io/projected/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb-kube-api-access-wwc8r\") pod \"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb\" (UID: \"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb\") " Feb 25 11:24:04 crc kubenswrapper[5005]: I0225 11:24:04.620647 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb-kube-api-access-wwc8r" (OuterVolumeSpecName: "kube-api-access-wwc8r") pod "f84fbc8f-fd0e-4c51-8667-f0e672a3aebb" (UID: "f84fbc8f-fd0e-4c51-8667-f0e672a3aebb"). InnerVolumeSpecName "kube-api-access-wwc8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:24:04 crc kubenswrapper[5005]: I0225 11:24:04.712494 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwc8r\" (UniqueName: \"kubernetes.io/projected/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb-kube-api-access-wwc8r\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:05 crc kubenswrapper[5005]: I0225 11:24:05.206604 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533644-tplgw" event={"ID":"f84fbc8f-fd0e-4c51-8667-f0e672a3aebb","Type":"ContainerDied","Data":"e2b8ef12e5bd2cbc56fe624741c6dc3263d7fd388ddac8e730ea06686f6e2c1c"} Feb 25 11:24:05 crc kubenswrapper[5005]: I0225 11:24:05.206641 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b8ef12e5bd2cbc56fe624741c6dc3263d7fd388ddac8e730ea06686f6e2c1c" Feb 25 11:24:05 crc kubenswrapper[5005]: I0225 11:24:05.206665 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533644-tplgw" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.109610 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" podUID="4031b2bd-16b2-49b4-a187-5eb591356aff" containerName="registry" containerID="cri-o://f4b1b34459113a88f4ab7aae312f54578279e010e98a435eca458731625356dd" gracePeriod=30 Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.323863 5005 generic.go:334] "Generic (PLEG): container finished" podID="4031b2bd-16b2-49b4-a187-5eb591356aff" containerID="f4b1b34459113a88f4ab7aae312f54578279e010e98a435eca458731625356dd" exitCode=0 Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.324043 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" 
event={"ID":"4031b2bd-16b2-49b4-a187-5eb591356aff","Type":"ContainerDied","Data":"f4b1b34459113a88f4ab7aae312f54578279e010e98a435eca458731625356dd"} Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.498523 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.654774 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-bound-sa-token\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655010 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655062 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-certificates\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655097 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4031b2bd-16b2-49b4-a187-5eb591356aff-ca-trust-extracted\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655122 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-trusted-ca\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655145 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-tls\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655166 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4031b2bd-16b2-49b4-a187-5eb591356aff-installation-pull-secrets\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655201 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmsbt\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-kube-api-access-dmsbt\") pod \"4031b2bd-16b2-49b4-a187-5eb591356aff\" (UID: \"4031b2bd-16b2-49b4-a187-5eb591356aff\") " Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655787 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.655854 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.664246 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.664757 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-kube-api-access-dmsbt" (OuterVolumeSpecName: "kube-api-access-dmsbt") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "kube-api-access-dmsbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.665929 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4031b2bd-16b2-49b4-a187-5eb591356aff-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.666227 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.666771 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.679114 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4031b2bd-16b2-49b4-a187-5eb591356aff-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4031b2bd-16b2-49b4-a187-5eb591356aff" (UID: "4031b2bd-16b2-49b4-a187-5eb591356aff"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.756287 5005 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.756745 5005 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4031b2bd-16b2-49b4-a187-5eb591356aff-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.756848 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4031b2bd-16b2-49b4-a187-5eb591356aff-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.756874 5005 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.756938 5005 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4031b2bd-16b2-49b4-a187-5eb591356aff-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.756957 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmsbt\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-kube-api-access-dmsbt\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:22 crc kubenswrapper[5005]: I0225 11:24:22.757009 5005 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4031b2bd-16b2-49b4-a187-5eb591356aff-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 25 11:24:23 crc 
kubenswrapper[5005]: I0225 11:24:23.333362 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" event={"ID":"4031b2bd-16b2-49b4-a187-5eb591356aff","Type":"ContainerDied","Data":"051e09b9a1bc39b4ecf381355bb6bd254e70f03f9356d0f3dfed9890553941ea"} Feb 25 11:24:23 crc kubenswrapper[5005]: I0225 11:24:23.333461 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xxm8g" Feb 25 11:24:23 crc kubenswrapper[5005]: I0225 11:24:23.333474 5005 scope.go:117] "RemoveContainer" containerID="f4b1b34459113a88f4ab7aae312f54578279e010e98a435eca458731625356dd" Feb 25 11:24:23 crc kubenswrapper[5005]: I0225 11:24:23.370825 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xxm8g"] Feb 25 11:24:23 crc kubenswrapper[5005]: I0225 11:24:23.385166 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xxm8g"] Feb 25 11:24:24 crc kubenswrapper[5005]: I0225 11:24:24.699293 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4031b2bd-16b2-49b4-a187-5eb591356aff" path="/var/lib/kubelet/pods/4031b2bd-16b2-49b4-a187-5eb591356aff/volumes" Feb 25 11:24:58 crc kubenswrapper[5005]: I0225 11:24:58.087863 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:24:58 crc kubenswrapper[5005]: I0225 11:24:58.088653 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:25:28 crc kubenswrapper[5005]: I0225 11:25:28.087412 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:25:28 crc kubenswrapper[5005]: I0225 11:25:28.088180 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.087300 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.089126 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.089275 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.090039 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"313fce3c48a9efbcb03b099224aaa818e6cd25bfe8ceeb165fe4267714129461"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.090296 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://313fce3c48a9efbcb03b099224aaa818e6cd25bfe8ceeb165fe4267714129461" gracePeriod=600 Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.986686 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="313fce3c48a9efbcb03b099224aaa818e6cd25bfe8ceeb165fe4267714129461" exitCode=0 Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.986843 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"313fce3c48a9efbcb03b099224aaa818e6cd25bfe8ceeb165fe4267714129461"} Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.987442 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"ddfc1e4f47aa879f9f576a0bce4a8073c09097db07a6b229402441fe59b04947"} Feb 25 11:25:58 crc kubenswrapper[5005]: I0225 11:25:58.987483 5005 scope.go:117] "RemoveContainer" containerID="bf45a94e52ca384448ef1d62b7ede2d8e7a4b2be829f1165eaef95efdccc1bf9" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.140147 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533646-4dlls"] Feb 25 11:26:00 crc kubenswrapper[5005]: E0225 
11:26:00.140458 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4031b2bd-16b2-49b4-a187-5eb591356aff" containerName="registry" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.140478 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4031b2bd-16b2-49b4-a187-5eb591356aff" containerName="registry" Feb 25 11:26:00 crc kubenswrapper[5005]: E0225 11:26:00.140494 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84fbc8f-fd0e-4c51-8667-f0e672a3aebb" containerName="oc" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.140502 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84fbc8f-fd0e-4c51-8667-f0e672a3aebb" containerName="oc" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.140639 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84fbc8f-fd0e-4c51-8667-f0e672a3aebb" containerName="oc" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.140653 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4031b2bd-16b2-49b4-a187-5eb591356aff" containerName="registry" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.141091 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.144147 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.145676 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.151431 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533646-4dlls"] Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.153192 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.278920 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjzh\" (UniqueName: \"kubernetes.io/projected/e30c92cc-84df-4ffb-892f-38caccfc092a-kube-api-access-xbjzh\") pod \"auto-csr-approver-29533646-4dlls\" (UID: \"e30c92cc-84df-4ffb-892f-38caccfc092a\") " pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.379901 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjzh\" (UniqueName: \"kubernetes.io/projected/e30c92cc-84df-4ffb-892f-38caccfc092a-kube-api-access-xbjzh\") pod \"auto-csr-approver-29533646-4dlls\" (UID: \"e30c92cc-84df-4ffb-892f-38caccfc092a\") " pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.406877 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbjzh\" (UniqueName: \"kubernetes.io/projected/e30c92cc-84df-4ffb-892f-38caccfc092a-kube-api-access-xbjzh\") pod \"auto-csr-approver-29533646-4dlls\" (UID: \"e30c92cc-84df-4ffb-892f-38caccfc092a\") " 
pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.465737 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.692408 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533646-4dlls"] Feb 25 11:26:00 crc kubenswrapper[5005]: W0225 11:26:00.694226 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode30c92cc_84df_4ffb_892f_38caccfc092a.slice/crio-aeae57e44b4073156b5a55ac61cd6e5d03a71f71977081bcd53902ad1cfeb864 WatchSource:0}: Error finding container aeae57e44b4073156b5a55ac61cd6e5d03a71f71977081bcd53902ad1cfeb864: Status 404 returned error can't find the container with id aeae57e44b4073156b5a55ac61cd6e5d03a71f71977081bcd53902ad1cfeb864 Feb 25 11:26:00 crc kubenswrapper[5005]: I0225 11:26:00.697427 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:26:01 crc kubenswrapper[5005]: I0225 11:26:01.007571 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533646-4dlls" event={"ID":"e30c92cc-84df-4ffb-892f-38caccfc092a","Type":"ContainerStarted","Data":"aeae57e44b4073156b5a55ac61cd6e5d03a71f71977081bcd53902ad1cfeb864"} Feb 25 11:26:02 crc kubenswrapper[5005]: I0225 11:26:02.012152 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533646-4dlls" event={"ID":"e30c92cc-84df-4ffb-892f-38caccfc092a","Type":"ContainerStarted","Data":"f7ce4101d45dd9b9f387ed5810f32fabdf1e7feca4ebd40e50c2319cc53b7e95"} Feb 25 11:26:02 crc kubenswrapper[5005]: I0225 11:26:02.026710 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533646-4dlls" 
podStartSLOduration=0.971036071 podStartE2EDuration="2.026693401s" podCreationTimestamp="2026-02-25 11:26:00 +0000 UTC" firstStartedPulling="2026-02-25 11:26:00.697134087 +0000 UTC m=+474.737866424" lastFinishedPulling="2026-02-25 11:26:01.752791397 +0000 UTC m=+475.793523754" observedRunningTime="2026-02-25 11:26:02.024729377 +0000 UTC m=+476.065461734" watchObservedRunningTime="2026-02-25 11:26:02.026693401 +0000 UTC m=+476.067425728" Feb 25 11:26:03 crc kubenswrapper[5005]: I0225 11:26:03.023193 5005 generic.go:334] "Generic (PLEG): container finished" podID="e30c92cc-84df-4ffb-892f-38caccfc092a" containerID="f7ce4101d45dd9b9f387ed5810f32fabdf1e7feca4ebd40e50c2319cc53b7e95" exitCode=0 Feb 25 11:26:03 crc kubenswrapper[5005]: I0225 11:26:03.023320 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533646-4dlls" event={"ID":"e30c92cc-84df-4ffb-892f-38caccfc092a","Type":"ContainerDied","Data":"f7ce4101d45dd9b9f387ed5810f32fabdf1e7feca4ebd40e50c2319cc53b7e95"} Feb 25 11:26:04 crc kubenswrapper[5005]: I0225 11:26:04.333927 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:04 crc kubenswrapper[5005]: I0225 11:26:04.535787 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbjzh\" (UniqueName: \"kubernetes.io/projected/e30c92cc-84df-4ffb-892f-38caccfc092a-kube-api-access-xbjzh\") pod \"e30c92cc-84df-4ffb-892f-38caccfc092a\" (UID: \"e30c92cc-84df-4ffb-892f-38caccfc092a\") " Feb 25 11:26:04 crc kubenswrapper[5005]: I0225 11:26:04.544911 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e30c92cc-84df-4ffb-892f-38caccfc092a-kube-api-access-xbjzh" (OuterVolumeSpecName: "kube-api-access-xbjzh") pod "e30c92cc-84df-4ffb-892f-38caccfc092a" (UID: "e30c92cc-84df-4ffb-892f-38caccfc092a"). InnerVolumeSpecName "kube-api-access-xbjzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:26:04 crc kubenswrapper[5005]: I0225 11:26:04.637910 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbjzh\" (UniqueName: \"kubernetes.io/projected/e30c92cc-84df-4ffb-892f-38caccfc092a-kube-api-access-xbjzh\") on node \"crc\" DevicePath \"\"" Feb 25 11:26:05 crc kubenswrapper[5005]: I0225 11:26:05.040515 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533646-4dlls" event={"ID":"e30c92cc-84df-4ffb-892f-38caccfc092a","Type":"ContainerDied","Data":"aeae57e44b4073156b5a55ac61cd6e5d03a71f71977081bcd53902ad1cfeb864"} Feb 25 11:26:05 crc kubenswrapper[5005]: I0225 11:26:05.040581 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeae57e44b4073156b5a55ac61cd6e5d03a71f71977081bcd53902ad1cfeb864" Feb 25 11:26:05 crc kubenswrapper[5005]: I0225 11:26:05.040663 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533646-4dlls" Feb 25 11:26:05 crc kubenswrapper[5005]: I0225 11:26:05.096794 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533640-qhftv"] Feb 25 11:26:05 crc kubenswrapper[5005]: I0225 11:26:05.105838 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533640-qhftv"] Feb 25 11:26:06 crc kubenswrapper[5005]: I0225 11:26:06.699766 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705e7826-0109-4ea7-bfe8-3cf9e37285bc" path="/var/lib/kubelet/pods/705e7826-0109-4ea7-bfe8-3cf9e37285bc/volumes" Feb 25 11:26:07 crc kubenswrapper[5005]: I0225 11:26:07.107860 5005 scope.go:117] "RemoveContainer" containerID="58a0cac6b64186b40055e05be39c77b4e3b3c823d354ab7510582a7dda79b83e" Feb 25 11:27:07 crc kubenswrapper[5005]: I0225 11:27:07.152657 5005 scope.go:117] "RemoveContainer" 
containerID="049753b6ff0dfb9295486a0e755ce1792a50a9f099ea773f053cc9a630cf84f6" Feb 25 11:27:07 crc kubenswrapper[5005]: I0225 11:27:07.176813 5005 scope.go:117] "RemoveContainer" containerID="d5252fa3ede36290c040c2a90438259b3dbf341695dcb3f9a342949e313ecafc" Feb 25 11:27:07 crc kubenswrapper[5005]: I0225 11:27:07.202985 5005 scope.go:117] "RemoveContainer" containerID="04e9e5efffe4d02306c5548c74c766d7dbb28fed899f21bcb5413103007a125e" Feb 25 11:27:07 crc kubenswrapper[5005]: I0225 11:27:07.222354 5005 scope.go:117] "RemoveContainer" containerID="2d98975ee81a28eedb319b71763ee373c43fc5c9d348e0af1034229ebbb4fcd8" Feb 25 11:27:58 crc kubenswrapper[5005]: I0225 11:27:58.087047 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:27:58 crc kubenswrapper[5005]: I0225 11:27:58.087986 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.138641 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533648-q2lgl"] Feb 25 11:28:00 crc kubenswrapper[5005]: E0225 11:28:00.139806 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30c92cc-84df-4ffb-892f-38caccfc092a" containerName="oc" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.139847 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30c92cc-84df-4ffb-892f-38caccfc092a" containerName="oc" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.140060 5005 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e30c92cc-84df-4ffb-892f-38caccfc092a" containerName="oc" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.140678 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.143168 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.143849 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.145656 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.148909 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533648-q2lgl"] Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.227345 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46s4g\" (UniqueName: \"kubernetes.io/projected/99160239-c586-4344-b60e-164b1eb940cb-kube-api-access-46s4g\") pod \"auto-csr-approver-29533648-q2lgl\" (UID: \"99160239-c586-4344-b60e-164b1eb940cb\") " pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.329235 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46s4g\" (UniqueName: \"kubernetes.io/projected/99160239-c586-4344-b60e-164b1eb940cb-kube-api-access-46s4g\") pod \"auto-csr-approver-29533648-q2lgl\" (UID: \"99160239-c586-4344-b60e-164b1eb940cb\") " pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.347797 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-46s4g\" (UniqueName: \"kubernetes.io/projected/99160239-c586-4344-b60e-164b1eb940cb-kube-api-access-46s4g\") pod \"auto-csr-approver-29533648-q2lgl\" (UID: \"99160239-c586-4344-b60e-164b1eb940cb\") " pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.458223 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.695910 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533648-q2lgl"] Feb 25 11:28:00 crc kubenswrapper[5005]: I0225 11:28:00.803900 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" event={"ID":"99160239-c586-4344-b60e-164b1eb940cb","Type":"ContainerStarted","Data":"d7857bd9539ead45652b5285509d1f1d5c53e0caf8f86ccb07937cf3ec6526d0"} Feb 25 11:28:02 crc kubenswrapper[5005]: I0225 11:28:02.823105 5005 generic.go:334] "Generic (PLEG): container finished" podID="99160239-c586-4344-b60e-164b1eb940cb" containerID="a9983490c071ee48c65d734c038f6ea8e7e95f5ff000491bf47db5fcee24ff43" exitCode=0 Feb 25 11:28:02 crc kubenswrapper[5005]: I0225 11:28:02.823247 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" event={"ID":"99160239-c586-4344-b60e-164b1eb940cb","Type":"ContainerDied","Data":"a9983490c071ee48c65d734c038f6ea8e7e95f5ff000491bf47db5fcee24ff43"} Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.175240 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.283661 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46s4g\" (UniqueName: \"kubernetes.io/projected/99160239-c586-4344-b60e-164b1eb940cb-kube-api-access-46s4g\") pod \"99160239-c586-4344-b60e-164b1eb940cb\" (UID: \"99160239-c586-4344-b60e-164b1eb940cb\") " Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.289039 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99160239-c586-4344-b60e-164b1eb940cb-kube-api-access-46s4g" (OuterVolumeSpecName: "kube-api-access-46s4g") pod "99160239-c586-4344-b60e-164b1eb940cb" (UID: "99160239-c586-4344-b60e-164b1eb940cb"). InnerVolumeSpecName "kube-api-access-46s4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.386257 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46s4g\" (UniqueName: \"kubernetes.io/projected/99160239-c586-4344-b60e-164b1eb940cb-kube-api-access-46s4g\") on node \"crc\" DevicePath \"\"" Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.838821 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" event={"ID":"99160239-c586-4344-b60e-164b1eb940cb","Type":"ContainerDied","Data":"d7857bd9539ead45652b5285509d1f1d5c53e0caf8f86ccb07937cf3ec6526d0"} Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.838881 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7857bd9539ead45652b5285509d1f1d5c53e0caf8f86ccb07937cf3ec6526d0" Feb 25 11:28:04 crc kubenswrapper[5005]: I0225 11:28:04.838938 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533648-q2lgl" Feb 25 11:28:05 crc kubenswrapper[5005]: I0225 11:28:05.251938 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533642-7t82d"] Feb 25 11:28:05 crc kubenswrapper[5005]: I0225 11:28:05.260667 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533642-7t82d"] Feb 25 11:28:06 crc kubenswrapper[5005]: I0225 11:28:06.697936 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d98113-0837-4250-87f2-f4c32da84c73" path="/var/lib/kubelet/pods/d8d98113-0837-4250-87f2-f4c32da84c73/volumes" Feb 25 11:28:28 crc kubenswrapper[5005]: I0225 11:28:28.087286 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:28:28 crc kubenswrapper[5005]: I0225 11:28:28.087975 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:28:58 crc kubenswrapper[5005]: I0225 11:28:58.088093 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:28:58 crc kubenswrapper[5005]: I0225 11:28:58.088518 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:28:58 crc kubenswrapper[5005]: I0225 11:28:58.088553 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:28:58 crc kubenswrapper[5005]: I0225 11:28:58.089010 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ddfc1e4f47aa879f9f576a0bce4a8073c09097db07a6b229402441fe59b04947"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:28:58 crc kubenswrapper[5005]: I0225 11:28:58.089050 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://ddfc1e4f47aa879f9f576a0bce4a8073c09097db07a6b229402441fe59b04947" gracePeriod=600 Feb 25 11:28:59 crc kubenswrapper[5005]: I0225 11:28:59.215723 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="ddfc1e4f47aa879f9f576a0bce4a8073c09097db07a6b229402441fe59b04947" exitCode=0 Feb 25 11:28:59 crc kubenswrapper[5005]: I0225 11:28:59.215802 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"ddfc1e4f47aa879f9f576a0bce4a8073c09097db07a6b229402441fe59b04947"} Feb 25 11:28:59 crc kubenswrapper[5005]: I0225 11:28:59.216599 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"725882823e770cccc62c7fb201d52910ad904925f31e4c131e1bed3c3ec5a21f"} Feb 25 11:28:59 crc kubenswrapper[5005]: I0225 11:28:59.216671 5005 scope.go:117] "RemoveContainer" containerID="313fce3c48a9efbcb03b099224aaa818e6cd25bfe8ceeb165fe4267714129461" Feb 25 11:29:07 crc kubenswrapper[5005]: I0225 11:29:07.294084 5005 scope.go:117] "RemoveContainer" containerID="b7a2ce1a21e524e46a844e30b1b48ab33784e69f462a67ea346c7bf93eb50dd8" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.292642 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv"] Feb 25 11:29:20 crc kubenswrapper[5005]: E0225 11:29:20.294446 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99160239-c586-4344-b60e-164b1eb940cb" containerName="oc" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.294555 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="99160239-c586-4344-b60e-164b1eb940cb" containerName="oc" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.294760 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="99160239-c586-4344-b60e-164b1eb940cb" containerName="oc" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.295284 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.298533 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.298537 5005 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-csmjp" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.299172 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.300786 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-fqn5s"] Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.301641 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fqn5s" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.309961 5005 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kkxmh" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.313071 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv"] Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.320202 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fqn5s"] Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.324322 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7q9jl"] Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.325271 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.328132 5005 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-prnss" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.334147 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7q9jl"] Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.334764 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg6m2\" (UniqueName: \"kubernetes.io/projected/3bf0b057-d203-4825-973b-c6ec18ce6008-kube-api-access-sg6m2\") pod \"cert-manager-858654f9db-fqn5s\" (UID: \"3bf0b057-d203-4825-973b-c6ec18ce6008\") " pod="cert-manager/cert-manager-858654f9db-fqn5s" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.334873 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mrxg\" (UniqueName: \"kubernetes.io/projected/597607f6-7b88-42c4-959e-c40d7273b7d5-kube-api-access-2mrxg\") pod \"cert-manager-webhook-687f57d79b-7q9jl\" (UID: \"597607f6-7b88-42c4-959e-c40d7273b7d5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.334963 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmft5\" (UniqueName: \"kubernetes.io/projected/7080961a-f87d-44d8-ba9b-a97d6e6113a3-kube-api-access-cmft5\") pod \"cert-manager-cainjector-cf98fcc89-8qqzv\" (UID: \"7080961a-f87d-44d8-ba9b-a97d6e6113a3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.436446 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg6m2\" (UniqueName: 
\"kubernetes.io/projected/3bf0b057-d203-4825-973b-c6ec18ce6008-kube-api-access-sg6m2\") pod \"cert-manager-858654f9db-fqn5s\" (UID: \"3bf0b057-d203-4825-973b-c6ec18ce6008\") " pod="cert-manager/cert-manager-858654f9db-fqn5s" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.436884 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mrxg\" (UniqueName: \"kubernetes.io/projected/597607f6-7b88-42c4-959e-c40d7273b7d5-kube-api-access-2mrxg\") pod \"cert-manager-webhook-687f57d79b-7q9jl\" (UID: \"597607f6-7b88-42c4-959e-c40d7273b7d5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.436952 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmft5\" (UniqueName: \"kubernetes.io/projected/7080961a-f87d-44d8-ba9b-a97d6e6113a3-kube-api-access-cmft5\") pod \"cert-manager-cainjector-cf98fcc89-8qqzv\" (UID: \"7080961a-f87d-44d8-ba9b-a97d6e6113a3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.456763 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mrxg\" (UniqueName: \"kubernetes.io/projected/597607f6-7b88-42c4-959e-c40d7273b7d5-kube-api-access-2mrxg\") pod \"cert-manager-webhook-687f57d79b-7q9jl\" (UID: \"597607f6-7b88-42c4-959e-c40d7273b7d5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.457026 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg6m2\" (UniqueName: \"kubernetes.io/projected/3bf0b057-d203-4825-973b-c6ec18ce6008-kube-api-access-sg6m2\") pod \"cert-manager-858654f9db-fqn5s\" (UID: \"3bf0b057-d203-4825-973b-c6ec18ce6008\") " pod="cert-manager/cert-manager-858654f9db-fqn5s" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.458261 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cmft5\" (UniqueName: \"kubernetes.io/projected/7080961a-f87d-44d8-ba9b-a97d6e6113a3-kube-api-access-cmft5\") pod \"cert-manager-cainjector-cf98fcc89-8qqzv\" (UID: \"7080961a-f87d-44d8-ba9b-a97d6e6113a3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.623362 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.633438 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fqn5s" Feb 25 11:29:20 crc kubenswrapper[5005]: I0225 11:29:20.649213 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:21 crc kubenswrapper[5005]: I0225 11:29:21.038037 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv"] Feb 25 11:29:21 crc kubenswrapper[5005]: I0225 11:29:21.079097 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7q9jl"] Feb 25 11:29:21 crc kubenswrapper[5005]: I0225 11:29:21.085056 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fqn5s"] Feb 25 11:29:21 crc kubenswrapper[5005]: W0225 11:29:21.088429 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod597607f6_7b88_42c4_959e_c40d7273b7d5.slice/crio-6aac06e1c98381f88cfe71c6febcfe0ce181a2977ecca18621d7d99b1264c5fc WatchSource:0}: Error finding container 6aac06e1c98381f88cfe71c6febcfe0ce181a2977ecca18621d7d99b1264c5fc: Status 404 returned error can't find the container with id 6aac06e1c98381f88cfe71c6febcfe0ce181a2977ecca18621d7d99b1264c5fc Feb 25 11:29:21 crc 
kubenswrapper[5005]: W0225 11:29:21.088825 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf0b057_d203_4825_973b_c6ec18ce6008.slice/crio-7b4ea8ab1ca9a3b8e68f2142a44cc7e169d0e1ac69a23f890e6939c88ebc6464 WatchSource:0}: Error finding container 7b4ea8ab1ca9a3b8e68f2142a44cc7e169d0e1ac69a23f890e6939c88ebc6464: Status 404 returned error can't find the container with id 7b4ea8ab1ca9a3b8e68f2142a44cc7e169d0e1ac69a23f890e6939c88ebc6464 Feb 25 11:29:21 crc kubenswrapper[5005]: I0225 11:29:21.352322 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" event={"ID":"597607f6-7b88-42c4-959e-c40d7273b7d5","Type":"ContainerStarted","Data":"6aac06e1c98381f88cfe71c6febcfe0ce181a2977ecca18621d7d99b1264c5fc"} Feb 25 11:29:21 crc kubenswrapper[5005]: I0225 11:29:21.353496 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fqn5s" event={"ID":"3bf0b057-d203-4825-973b-c6ec18ce6008","Type":"ContainerStarted","Data":"7b4ea8ab1ca9a3b8e68f2142a44cc7e169d0e1ac69a23f890e6939c88ebc6464"} Feb 25 11:29:21 crc kubenswrapper[5005]: I0225 11:29:21.354475 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" event={"ID":"7080961a-f87d-44d8-ba9b-a97d6e6113a3","Type":"ContainerStarted","Data":"a0e35957b522af91048596db19264eeaaba667fd9be9c4130065f23de85478bb"} Feb 25 11:29:25 crc kubenswrapper[5005]: I0225 11:29:25.382612 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" event={"ID":"597607f6-7b88-42c4-959e-c40d7273b7d5","Type":"ContainerStarted","Data":"b56340b63f6e452fdafbd99cba11ffb8ddffabe5043cc3006fe3a5779eff5dbc"} Feb 25 11:29:25 crc kubenswrapper[5005]: I0225 11:29:25.383104 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:25 crc kubenswrapper[5005]: I0225 11:29:25.384582 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fqn5s" event={"ID":"3bf0b057-d203-4825-973b-c6ec18ce6008","Type":"ContainerStarted","Data":"eb77552fd94354ebc73d8c3eb232ecadf3e6d9258ba39bfe37e2b34e585d1d4c"} Feb 25 11:29:25 crc kubenswrapper[5005]: I0225 11:29:25.386483 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" event={"ID":"7080961a-f87d-44d8-ba9b-a97d6e6113a3","Type":"ContainerStarted","Data":"89f802e83649647d172aebf437f99b3ebcbbe87c0ef8ea3c8b5bb6259b81d745"} Feb 25 11:29:25 crc kubenswrapper[5005]: I0225 11:29:25.411449 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" podStartSLOduration=1.857873748 podStartE2EDuration="5.41139266s" podCreationTimestamp="2026-02-25 11:29:20 +0000 UTC" firstStartedPulling="2026-02-25 11:29:21.091210877 +0000 UTC m=+675.131943204" lastFinishedPulling="2026-02-25 11:29:24.644729789 +0000 UTC m=+678.685462116" observedRunningTime="2026-02-25 11:29:25.405414183 +0000 UTC m=+679.446146520" watchObservedRunningTime="2026-02-25 11:29:25.41139266 +0000 UTC m=+679.452125027" Feb 25 11:29:25 crc kubenswrapper[5005]: I0225 11:29:25.424378 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8qqzv" podStartSLOduration=1.825123557 podStartE2EDuration="5.424353464s" podCreationTimestamp="2026-02-25 11:29:20 +0000 UTC" firstStartedPulling="2026-02-25 11:29:21.046060269 +0000 UTC m=+675.086792586" lastFinishedPulling="2026-02-25 11:29:24.645290166 +0000 UTC m=+678.686022493" observedRunningTime="2026-02-25 11:29:25.419691159 +0000 UTC m=+679.460423526" watchObservedRunningTime="2026-02-25 11:29:25.424353464 +0000 UTC m=+679.465085791" Feb 25 11:29:25 crc 
kubenswrapper[5005]: I0225 11:29:25.443821 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-fqn5s" podStartSLOduration=1.906428813 podStartE2EDuration="5.44379521s" podCreationTimestamp="2026-02-25 11:29:20 +0000 UTC" firstStartedPulling="2026-02-25 11:29:21.090890107 +0000 UTC m=+675.131622474" lastFinishedPulling="2026-02-25 11:29:24.628256544 +0000 UTC m=+678.668988871" observedRunningTime="2026-02-25 11:29:25.438557027 +0000 UTC m=+679.479289384" watchObservedRunningTime="2026-02-25 11:29:25.44379521 +0000 UTC m=+679.484527567" Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.763037 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfx5c"] Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764352 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-controller" containerID="cri-o://675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764451 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="sbdb" containerID="cri-o://a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764561 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="nbdb" containerID="cri-o://a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764651 5005 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="northd" containerID="cri-o://cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764735 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764802 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-node" containerID="cri-o://fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.764865 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-acl-logging" containerID="cri-o://310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" gracePeriod=30 Feb 25 11:29:29 crc kubenswrapper[5005]: I0225 11:29:29.810296 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovnkube-controller" containerID="cri-o://2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" gracePeriod=30 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.045773 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfx5c_c496d07b-7684-4d5f-b36e-be187e76a3de/ovn-acl-logging/0.log" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 
11:29:30.046188 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfx5c_c496d07b-7684-4d5f-b36e-be187e76a3de/ovn-controller/0.log" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.046506 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081176 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-script-lib\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081214 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-systemd-units\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081230 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-kubelet\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081246 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-var-lib-openvswitch\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081263 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-node-log\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081291 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-log-socket\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081316 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-systemd\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081332 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-config\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081353 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081397 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-netns\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc 
kubenswrapper[5005]: I0225 11:29:30.081436 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-env-overrides\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081455 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496d07b-7684-4d5f-b36e-be187e76a3de-ovn-node-metrics-cert\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081487 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-etc-openvswitch\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081526 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z24kc\" (UniqueName: \"kubernetes.io/projected/c496d07b-7684-4d5f-b36e-be187e76a3de-kube-api-access-z24kc\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081548 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-netd\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081571 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-openvswitch\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081564 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081586 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-slash\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081671 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-ovn-kubernetes\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081745 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-ovn\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: \"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081791 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-bin\") pod \"c496d07b-7684-4d5f-b36e-be187e76a3de\" (UID: 
\"c496d07b-7684-4d5f-b36e-be187e76a3de\") " Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081614 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-slash" (OuterVolumeSpecName: "host-slash") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081594 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-log-socket" (OuterVolumeSpecName: "log-socket") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081631 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081647 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.081703 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082032 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082062 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082157 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082165 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082199 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082224 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082234 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082274 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082306 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082490 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082519 5005 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082555 5005 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082577 5005 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082598 5005 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-slash\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082620 5005 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082647 5005 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082669 5005 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082691 5005 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082714 5005 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082737 5005 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082744 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-node-log" (OuterVolumeSpecName: "node-log") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082760 5005 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-log-socket\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082783 5005 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082806 5005 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082830 5005 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.082854 5005 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.087988 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c496d07b-7684-4d5f-b36e-be187e76a3de-kube-api-access-z24kc" (OuterVolumeSpecName: "kube-api-access-z24kc") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "kube-api-access-z24kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.088492 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c496d07b-7684-4d5f-b36e-be187e76a3de-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.103741 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c496d07b-7684-4d5f-b36e-be187e76a3de" (UID: "c496d07b-7684-4d5f-b36e-be187e76a3de"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.122868 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-shcv2"] Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123083 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-acl-logging" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123097 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-acl-logging" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123107 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kubecfg-setup" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123113 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kubecfg-setup" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123121 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-controller" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123127 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-controller" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123137 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="sbdb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123142 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="sbdb" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123151 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovnkube-controller" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123156 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovnkube-controller" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123164 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-ovn-metrics" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123170 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-ovn-metrics" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123177 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="northd" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123182 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="northd" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123189 5005 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="nbdb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123194 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="nbdb" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.123206 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-node" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123212 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-node" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123297 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovnkube-controller" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123305 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-controller" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123315 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-ovn-metrics" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123324 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="ovn-acl-logging" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123330 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="northd" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123336 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="nbdb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123343 5005 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="sbdb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.123352 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerName="kube-rbac-proxy-node" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.136110 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184004 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-etc-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184054 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-node-log\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184158 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184212 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-log-socket\") pod \"ovnkube-node-shcv2\" (UID: 
\"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184263 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-var-lib-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184288 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-cni-bin\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184308 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-cni-netd\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184341 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-kubelet\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184437 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-ovn\") pod \"ovnkube-node-shcv2\" 
(UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184454 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-systemd\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184491 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-ovnkube-config\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184508 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-run-netns\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184523 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-env-overrides\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184553 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-run-ovn-kubernetes\") 
pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184575 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52h6r\" (UniqueName: \"kubernetes.io/projected/b7563844-c122-4960-820c-04074aaa9363-kube-api-access-52h6r\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184614 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-slash\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184644 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184689 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-systemd-units\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184725 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/b7563844-c122-4960-820c-04074aaa9363-ovn-node-metrics-cert\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184755 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-ovnkube-script-lib\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184815 5005 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184838 5005 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496d07b-7684-4d5f-b36e-be187e76a3de-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184850 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z24kc\" (UniqueName: \"kubernetes.io/projected/c496d07b-7684-4d5f-b36e-be187e76a3de-kube-api-access-z24kc\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184861 5005 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496d07b-7684-4d5f-b36e-be187e76a3de-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.184869 5005 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496d07b-7684-4d5f-b36e-be187e76a3de-node-log\") on node \"crc\" DevicePath \"\"" Feb 25 11:29:30 crc 
kubenswrapper[5005]: I0225 11:29:30.285695 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-ovn\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285759 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-systemd\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285801 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-ovnkube-config\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285832 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-run-netns\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285867 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-env-overrides\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285901 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-systemd\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285904 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-run-ovn-kubernetes\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285947 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-run-ovn-kubernetes\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285985 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52h6r\" (UniqueName: \"kubernetes.io/projected/b7563844-c122-4960-820c-04074aaa9363-kube-api-access-52h6r\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285997 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-run-netns\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.285990 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-ovn\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286029 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-slash\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286072 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286115 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-systemd-units\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286156 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7563844-c122-4960-820c-04074aaa9363-ovn-node-metrics-cert\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286196 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-ovnkube-script-lib\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286246 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-slash\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286269 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-etc-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286339 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286360 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-node-log\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286449 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286475 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-ovnkube-config\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286487 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-log-socket\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286534 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-log-socket\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286546 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-node-log\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286567 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-env-overrides\") pod \"ovnkube-node-shcv2\" (UID: 
\"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286583 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-run-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286569 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-var-lib-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286304 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-etc-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286655 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-var-lib-openvswitch\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286639 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-cni-bin\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286677 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-systemd-units\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286739 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-cni-bin\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286710 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-cni-netd\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286838 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-kubelet\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286757 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-cni-netd\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.286968 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7563844-c122-4960-820c-04074aaa9363-host-kubelet\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.287437 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7563844-c122-4960-820c-04074aaa9363-ovnkube-script-lib\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.291344 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7563844-c122-4960-820c-04074aaa9363-ovn-node-metrics-cert\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.314783 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52h6r\" (UniqueName: \"kubernetes.io/projected/b7563844-c122-4960-820c-04074aaa9363-kube-api-access-52h6r\") pod \"ovnkube-node-shcv2\" (UID: \"b7563844-c122-4960-820c-04074aaa9363\") " pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.418354 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfx5c_c496d07b-7684-4d5f-b36e-be187e76a3de/ovn-acl-logging/0.log" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.418869 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfx5c_c496d07b-7684-4d5f-b36e-be187e76a3de/ovn-controller/0.log" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419307 5005 
generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" exitCode=0 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419341 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" exitCode=0 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419351 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" exitCode=0 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419360 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" exitCode=0 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419371 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" exitCode=0 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419379 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" exitCode=0 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419346 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419404 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" 
containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" exitCode=143 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419467 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419492 5005 generic.go:334] "Generic (PLEG): container finished" podID="c496d07b-7684-4d5f-b36e-be187e76a3de" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" exitCode=143 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419510 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419529 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419552 5005 scope.go:117] "RemoveContainer" containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419536 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419664 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419681 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419694 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419717 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419725 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 
11:29:30.419733 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419742 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419750 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419758 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419764 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419771 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419777 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419784 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419792 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419798 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419808 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419818 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419825 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419831 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419837 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419844 5005 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419850 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419857 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419863 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419869 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419879 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfx5c" event={"ID":"c496d07b-7684-4d5f-b36e-be187e76a3de","Type":"ContainerDied","Data":"4a02fd2dd8ef9c173f9e5cd97f53894b0bbf7a0424f96b8c29d8e326350bbf5b"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419889 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419897 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} Feb 25 
11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419904 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419911 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419917 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419924 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419930 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419936 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.419943 5005 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.421935 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsd74_03175783-f1a5-4ac6-b942-91a23ab4439d/kube-multus/0.log" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 
11:29:30.422035 5005 generic.go:334] "Generic (PLEG): container finished" podID="03175783-f1a5-4ac6-b942-91a23ab4439d" containerID="0e7097a39853110bb9bfed8490c0eac568dea29dba97a8561d60566e3d98fa83" exitCode=2 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.422088 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsd74" event={"ID":"03175783-f1a5-4ac6-b942-91a23ab4439d","Type":"ContainerDied","Data":"0e7097a39853110bb9bfed8490c0eac568dea29dba97a8561d60566e3d98fa83"} Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.422732 5005 scope.go:117] "RemoveContainer" containerID="0e7097a39853110bb9bfed8490c0eac568dea29dba97a8561d60566e3d98fa83" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.452878 5005 scope.go:117] "RemoveContainer" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.459119 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.478861 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfx5c"] Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.487034 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfx5c"] Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.499264 5005 scope.go:117] "RemoveContainer" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: W0225 11:29:30.509550 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7563844_c122_4960_820c_04074aaa9363.slice/crio-0d617574e68b35aa8e5d33f8515bfbc6af086581f8f2b4e35f615d8d635bf8c3 WatchSource:0}: Error finding container 0d617574e68b35aa8e5d33f8515bfbc6af086581f8f2b4e35f615d8d635bf8c3: Status 
404 returned error can't find the container with id 0d617574e68b35aa8e5d33f8515bfbc6af086581f8f2b4e35f615d8d635bf8c3 Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.517611 5005 scope.go:117] "RemoveContainer" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.540075 5005 scope.go:117] "RemoveContainer" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.571629 5005 scope.go:117] "RemoveContainer" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.586827 5005 scope.go:117] "RemoveContainer" containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.606739 5005 scope.go:117] "RemoveContainer" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.621629 5005 scope.go:117] "RemoveContainer" containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.651665 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-7q9jl" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.685595 5005 scope.go:117] "RemoveContainer" containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.686086 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": container with ID starting with 2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224 not found: ID does not exist" 
containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.686135 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} err="failed to get container status \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": rpc error: code = NotFound desc = could not find container \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": container with ID starting with 2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.686164 5005 scope.go:117] "RemoveContainer" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.686495 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": container with ID starting with a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39 not found: ID does not exist" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.686520 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} err="failed to get container status \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": rpc error: code = NotFound desc = could not find container \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": container with ID starting with a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.686536 5005 scope.go:117] 
"RemoveContainer" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.686879 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": container with ID starting with a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411 not found: ID does not exist" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.686904 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} err="failed to get container status \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": rpc error: code = NotFound desc = could not find container \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": container with ID starting with a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.686918 5005 scope.go:117] "RemoveContainer" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.687198 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": container with ID starting with cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3 not found: ID does not exist" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.687240 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} err="failed to get container status \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": rpc error: code = NotFound desc = could not find container \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": container with ID starting with cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.687279 5005 scope.go:117] "RemoveContainer" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.687639 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": container with ID starting with ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb not found: ID does not exist" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.687665 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} err="failed to get container status \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": rpc error: code = NotFound desc = could not find container \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": container with ID starting with ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.687679 5005 scope.go:117] "RemoveContainer" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.687942 5005 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": container with ID starting with fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c not found: ID does not exist" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.687963 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} err="failed to get container status \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": rpc error: code = NotFound desc = could not find container \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": container with ID starting with fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.687975 5005 scope.go:117] "RemoveContainer" containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.688181 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": container with ID starting with 310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99 not found: ID does not exist" containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.688208 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} err="failed to get container status \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": rpc error: code = NotFound desc = could not find container 
\"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": container with ID starting with 310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.688227 5005 scope.go:117] "RemoveContainer" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.688475 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": container with ID starting with 675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11 not found: ID does not exist" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.688498 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} err="failed to get container status \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": rpc error: code = NotFound desc = could not find container \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": container with ID starting with 675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.688510 5005 scope.go:117] "RemoveContainer" containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" Feb 25 11:29:30 crc kubenswrapper[5005]: E0225 11:29:30.688768 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": container with ID starting with 975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492 not found: ID does not exist" 
containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.688823 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} err="failed to get container status \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": rpc error: code = NotFound desc = could not find container \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": container with ID starting with 975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.688860 5005 scope.go:117] "RemoveContainer" containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689127 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} err="failed to get container status \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": rpc error: code = NotFound desc = could not find container \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": container with ID starting with 2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689147 5005 scope.go:117] "RemoveContainer" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689398 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} err="failed to get container status \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": rpc error: code = NotFound desc = could 
not find container \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": container with ID starting with a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689433 5005 scope.go:117] "RemoveContainer" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689683 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} err="failed to get container status \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": rpc error: code = NotFound desc = could not find container \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": container with ID starting with a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689705 5005 scope.go:117] "RemoveContainer" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689925 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} err="failed to get container status \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": rpc error: code = NotFound desc = could not find container \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": container with ID starting with cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.689997 5005 scope.go:117] "RemoveContainer" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 
11:29:30.690642 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} err="failed to get container status \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": rpc error: code = NotFound desc = could not find container \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": container with ID starting with ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.690679 5005 scope.go:117] "RemoveContainer" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.691061 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} err="failed to get container status \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": rpc error: code = NotFound desc = could not find container \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": container with ID starting with fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.691090 5005 scope.go:117] "RemoveContainer" containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.691340 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} err="failed to get container status \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": rpc error: code = NotFound desc = could not find container \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": container with ID starting with 
310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.691396 5005 scope.go:117] "RemoveContainer" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.691617 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} err="failed to get container status \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": rpc error: code = NotFound desc = could not find container \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": container with ID starting with 675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.691645 5005 scope.go:117] "RemoveContainer" containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.692515 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} err="failed to get container status \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": rpc error: code = NotFound desc = could not find container \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": container with ID starting with 975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.692536 5005 scope.go:117] "RemoveContainer" containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.692707 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} err="failed to get container status \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": rpc error: code = NotFound desc = could not find container \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": container with ID starting with 2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.692768 5005 scope.go:117] "RemoveContainer" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.692941 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} err="failed to get container status \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": rpc error: code = NotFound desc = could not find container \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": container with ID starting with a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.692963 5005 scope.go:117] "RemoveContainer" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693180 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} err="failed to get container status \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": rpc error: code = NotFound desc = could not find container \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": container with ID starting with a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411 not found: ID does not 
exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693199 5005 scope.go:117] "RemoveContainer" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693363 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} err="failed to get container status \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": rpc error: code = NotFound desc = could not find container \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": container with ID starting with cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693416 5005 scope.go:117] "RemoveContainer" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693581 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} err="failed to get container status \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": rpc error: code = NotFound desc = could not find container \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": container with ID starting with ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693597 5005 scope.go:117] "RemoveContainer" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693790 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} err="failed to get container status 
\"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": rpc error: code = NotFound desc = could not find container \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": container with ID starting with fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693808 5005 scope.go:117] "RemoveContainer" containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.693832 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c496d07b-7684-4d5f-b36e-be187e76a3de" path="/var/lib/kubelet/pods/c496d07b-7684-4d5f-b36e-be187e76a3de/volumes" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694021 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} err="failed to get container status \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": rpc error: code = NotFound desc = could not find container \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": container with ID starting with 310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694041 5005 scope.go:117] "RemoveContainer" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694218 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} err="failed to get container status \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": rpc error: code = NotFound desc = could not find container \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": 
container with ID starting with 675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694235 5005 scope.go:117] "RemoveContainer" containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694387 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} err="failed to get container status \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": rpc error: code = NotFound desc = could not find container \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": container with ID starting with 975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694404 5005 scope.go:117] "RemoveContainer" containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694574 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} err="failed to get container status \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": rpc error: code = NotFound desc = could not find container \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": container with ID starting with 2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694592 5005 scope.go:117] "RemoveContainer" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694732 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} err="failed to get container status \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": rpc error: code = NotFound desc = could not find container \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": container with ID starting with a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694748 5005 scope.go:117] "RemoveContainer" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694880 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} err="failed to get container status \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": rpc error: code = NotFound desc = could not find container \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": container with ID starting with a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.694895 5005 scope.go:117] "RemoveContainer" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695125 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} err="failed to get container status \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": rpc error: code = NotFound desc = could not find container \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": container with ID starting with cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3 not found: ID does not 
exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695141 5005 scope.go:117] "RemoveContainer" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695281 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} err="failed to get container status \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": rpc error: code = NotFound desc = could not find container \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": container with ID starting with ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695297 5005 scope.go:117] "RemoveContainer" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695538 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} err="failed to get container status \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": rpc error: code = NotFound desc = could not find container \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": container with ID starting with fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695565 5005 scope.go:117] "RemoveContainer" containerID="310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695754 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99"} err="failed to get container status 
\"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": rpc error: code = NotFound desc = could not find container \"310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99\": container with ID starting with 310fa0d159836088b5b743d8316afa36dce529854c0859acac09cc58bf7c2c99 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695773 5005 scope.go:117] "RemoveContainer" containerID="675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695921 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11"} err="failed to get container status \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": rpc error: code = NotFound desc = could not find container \"675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11\": container with ID starting with 675d052f77946192759abdff843ca242001c41f0531cd06410787bf1fce25f11 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.695937 5005 scope.go:117] "RemoveContainer" containerID="975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696095 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492"} err="failed to get container status \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": rpc error: code = NotFound desc = could not find container \"975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492\": container with ID starting with 975d46ba18ce34a37e4862e5d5fd264a037d931f085f2f6d4ac11b51073bc492 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696109 5005 scope.go:117] "RemoveContainer" 
containerID="2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696238 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224"} err="failed to get container status \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": rpc error: code = NotFound desc = could not find container \"2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224\": container with ID starting with 2d2a7c161124112ded340d1bc04ef4f518e2666e8b47d7dbdd1f08a738c43224 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696254 5005 scope.go:117] "RemoveContainer" containerID="a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696436 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39"} err="failed to get container status \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": rpc error: code = NotFound desc = could not find container \"a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39\": container with ID starting with a0cf3587e332c4bc6e87a16b8e7608938b77c0869473ba73cbea5fd6ee279a39 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696451 5005 scope.go:117] "RemoveContainer" containerID="a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696586 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411"} err="failed to get container status \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": rpc error: code = NotFound desc = could 
not find container \"a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411\": container with ID starting with a687626d8ee79cf32049e2009ca48ae5ec596e0b0cad6eab05e7684c5c06c411 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696602 5005 scope.go:117] "RemoveContainer" containerID="cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696733 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3"} err="failed to get container status \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": rpc error: code = NotFound desc = could not find container \"cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3\": container with ID starting with cd6c7b4e86f824f8c4269deb6403989232560aa77c122794119360feaefe82a3 not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696748 5005 scope.go:117] "RemoveContainer" containerID="ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696877 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb"} err="failed to get container status \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": rpc error: code = NotFound desc = could not find container \"ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb\": container with ID starting with ef3b7c4788ff3fa4664bbe7e070005ad16793de98bd4a4144e13756ea64f16fb not found: ID does not exist" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 11:29:30.696891 5005 scope.go:117] "RemoveContainer" containerID="fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c" Feb 25 11:29:30 crc kubenswrapper[5005]: I0225 
11:29:30.697065 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c"} err="failed to get container status \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": rpc error: code = NotFound desc = could not find container \"fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c\": container with ID starting with fc0d45f69c524d9a03fe2659ca1ce58474ff5785e3ac46b8f1cd0423f0b3ae0c not found: ID does not exist" Feb 25 11:29:31 crc kubenswrapper[5005]: I0225 11:29:31.431072 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsd74_03175783-f1a5-4ac6-b942-91a23ab4439d/kube-multus/0.log" Feb 25 11:29:31 crc kubenswrapper[5005]: I0225 11:29:31.431227 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsd74" event={"ID":"03175783-f1a5-4ac6-b942-91a23ab4439d","Type":"ContainerStarted","Data":"fa2ee0458d43c7c95b30b742d8b087ce2964a2368a6f083340df883f02f1c2ed"} Feb 25 11:29:31 crc kubenswrapper[5005]: I0225 11:29:31.432790 5005 generic.go:334] "Generic (PLEG): container finished" podID="b7563844-c122-4960-820c-04074aaa9363" containerID="1ae2d756118bea4156627d7cefc79f4bc4402bf8314509cf47acc73b33f3290b" exitCode=0 Feb 25 11:29:31 crc kubenswrapper[5005]: I0225 11:29:31.432867 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerDied","Data":"1ae2d756118bea4156627d7cefc79f4bc4402bf8314509cf47acc73b33f3290b"} Feb 25 11:29:31 crc kubenswrapper[5005]: I0225 11:29:31.432906 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"0d617574e68b35aa8e5d33f8515bfbc6af086581f8f2b4e35f615d8d635bf8c3"} Feb 25 11:29:32 crc kubenswrapper[5005]: 
I0225 11:29:32.444123 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"cbfa3dcdf41bf0fb3e58ec3e97b2ec5dd42c28b140c0c0f3dae30de7caf43ae1"} Feb 25 11:29:32 crc kubenswrapper[5005]: I0225 11:29:32.444648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"b288131526433e1d56e63aad14b9b78346103e76192d4998373004fbf67a86bf"} Feb 25 11:29:32 crc kubenswrapper[5005]: I0225 11:29:32.444679 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"23fa052fcb098af16d791787b6218ea6cc7968f249763472ab8f8b6e5b2792f5"} Feb 25 11:29:32 crc kubenswrapper[5005]: I0225 11:29:32.444699 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"942139b090f4f7d64d1699429b783bc73d5d407854238cb71799b9d3603fd800"} Feb 25 11:29:32 crc kubenswrapper[5005]: I0225 11:29:32.444720 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"79ecd026988111aa738a6582b4d3a8fbc6fc971fa567f371dbc51aeab8d2e3f5"} Feb 25 11:29:32 crc kubenswrapper[5005]: I0225 11:29:32.444739 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"12c7d6fa1ad92e431bdae2ab7e02029cf8f84d13bd7c431a6ebd707091fc2f22"} Feb 25 11:29:35 crc kubenswrapper[5005]: I0225 11:29:35.466562 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"50a3ae1cc89ef20281528cbb8a995da181518f917bea4a7a49e8cae05d26629a"} Feb 25 11:29:37 crc kubenswrapper[5005]: I0225 11:29:37.486800 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" event={"ID":"b7563844-c122-4960-820c-04074aaa9363","Type":"ContainerStarted","Data":"d7901dbd1304d5819b1130298ed14df01313aeedb847cae3fab7126f1b09ef9b"} Feb 25 11:29:37 crc kubenswrapper[5005]: I0225 11:29:37.487540 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:37 crc kubenswrapper[5005]: I0225 11:29:37.487565 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:37 crc kubenswrapper[5005]: I0225 11:29:37.524404 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" podStartSLOduration=7.5243633370000005 podStartE2EDuration="7.524363337s" podCreationTimestamp="2026-02-25 11:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:29:37.522460768 +0000 UTC m=+691.563193135" watchObservedRunningTime="2026-02-25 11:29:37.524363337 +0000 UTC m=+691.565095674" Feb 25 11:29:37 crc kubenswrapper[5005]: I0225 11:29:37.538017 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:38 crc kubenswrapper[5005]: I0225 11:29:38.492796 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:29:38 crc kubenswrapper[5005]: I0225 11:29:38.515718 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.132956 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533650-ntmlt"] Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.134149 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.135956 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.136204 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.136416 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.141498 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n"] Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.142301 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.144941 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.145304 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.146496 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533650-ntmlt"] Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.150790 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n"] Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.281364 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grtd\" (UniqueName: \"kubernetes.io/projected/d72ffb45-73a1-482a-8544-24afc078bacf-kube-api-access-4grtd\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.281451 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d72ffb45-73a1-482a-8544-24afc078bacf-config-volume\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.281490 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzjr\" (UniqueName: 
\"kubernetes.io/projected/3cda6332-8370-4c9a-813d-ebc3e97b91b9-kube-api-access-xgzjr\") pod \"auto-csr-approver-29533650-ntmlt\" (UID: \"3cda6332-8370-4c9a-813d-ebc3e97b91b9\") " pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.281519 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d72ffb45-73a1-482a-8544-24afc078bacf-secret-volume\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.383197 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d72ffb45-73a1-482a-8544-24afc078bacf-secret-volume\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.383398 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4grtd\" (UniqueName: \"kubernetes.io/projected/d72ffb45-73a1-482a-8544-24afc078bacf-kube-api-access-4grtd\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.383518 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d72ffb45-73a1-482a-8544-24afc078bacf-config-volume\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 
11:30:00.383571 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgzjr\" (UniqueName: \"kubernetes.io/projected/3cda6332-8370-4c9a-813d-ebc3e97b91b9-kube-api-access-xgzjr\") pod \"auto-csr-approver-29533650-ntmlt\" (UID: \"3cda6332-8370-4c9a-813d-ebc3e97b91b9\") " pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.385182 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d72ffb45-73a1-482a-8544-24afc078bacf-config-volume\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.391041 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d72ffb45-73a1-482a-8544-24afc078bacf-secret-volume\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.399889 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgzjr\" (UniqueName: \"kubernetes.io/projected/3cda6332-8370-4c9a-813d-ebc3e97b91b9-kube-api-access-xgzjr\") pod \"auto-csr-approver-29533650-ntmlt\" (UID: \"3cda6332-8370-4c9a-813d-ebc3e97b91b9\") " pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.400172 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4grtd\" (UniqueName: \"kubernetes.io/projected/d72ffb45-73a1-482a-8544-24afc078bacf-kube-api-access-4grtd\") pod \"collect-profiles-29533650-4n78n\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.461891 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.470202 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.489933 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-shcv2" Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.759146 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533650-ntmlt"] Feb 25 11:30:00 crc kubenswrapper[5005]: I0225 11:30:00.925493 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n"] Feb 25 11:30:00 crc kubenswrapper[5005]: W0225 11:30:00.931541 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd72ffb45_73a1_482a_8544_24afc078bacf.slice/crio-559415942ed1d4bf9edbeb01b22961f6097f2162ab384e974d4a9cb6382cf229 WatchSource:0}: Error finding container 559415942ed1d4bf9edbeb01b22961f6097f2162ab384e974d4a9cb6382cf229: Status 404 returned error can't find the container with id 559415942ed1d4bf9edbeb01b22961f6097f2162ab384e974d4a9cb6382cf229 Feb 25 11:30:01 crc kubenswrapper[5005]: I0225 11:30:01.643687 5005 generic.go:334] "Generic (PLEG): container finished" podID="d72ffb45-73a1-482a-8544-24afc078bacf" containerID="0dd0599eae82e590a3588421ff6b9b4947e5f8b24397ad51c51b1cfb39a2ad29" exitCode=0 Feb 25 11:30:01 crc kubenswrapper[5005]: I0225 11:30:01.643743 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" event={"ID":"d72ffb45-73a1-482a-8544-24afc078bacf","Type":"ContainerDied","Data":"0dd0599eae82e590a3588421ff6b9b4947e5f8b24397ad51c51b1cfb39a2ad29"} Feb 25 11:30:01 crc kubenswrapper[5005]: I0225 11:30:01.644194 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" event={"ID":"d72ffb45-73a1-482a-8544-24afc078bacf","Type":"ContainerStarted","Data":"559415942ed1d4bf9edbeb01b22961f6097f2162ab384e974d4a9cb6382cf229"} Feb 25 11:30:01 crc kubenswrapper[5005]: I0225 11:30:01.645486 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" event={"ID":"3cda6332-8370-4c9a-813d-ebc3e97b91b9","Type":"ContainerStarted","Data":"c96704422c7daf937be49cb8363c9b09a662f466a154a0c8d56e8ddac77c14f5"} Feb 25 11:30:02 crc kubenswrapper[5005]: I0225 11:30:02.905230 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.016594 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d72ffb45-73a1-482a-8544-24afc078bacf-config-volume\") pod \"d72ffb45-73a1-482a-8544-24afc078bacf\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.016664 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d72ffb45-73a1-482a-8544-24afc078bacf-secret-volume\") pod \"d72ffb45-73a1-482a-8544-24afc078bacf\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.016730 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4grtd\" (UniqueName: \"kubernetes.io/projected/d72ffb45-73a1-482a-8544-24afc078bacf-kube-api-access-4grtd\") pod \"d72ffb45-73a1-482a-8544-24afc078bacf\" (UID: \"d72ffb45-73a1-482a-8544-24afc078bacf\") " Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.017919 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d72ffb45-73a1-482a-8544-24afc078bacf-config-volume" (OuterVolumeSpecName: "config-volume") pod "d72ffb45-73a1-482a-8544-24afc078bacf" (UID: "d72ffb45-73a1-482a-8544-24afc078bacf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.022008 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72ffb45-73a1-482a-8544-24afc078bacf-kube-api-access-4grtd" (OuterVolumeSpecName: "kube-api-access-4grtd") pod "d72ffb45-73a1-482a-8544-24afc078bacf" (UID: "d72ffb45-73a1-482a-8544-24afc078bacf"). 
InnerVolumeSpecName "kube-api-access-4grtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.023464 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72ffb45-73a1-482a-8544-24afc078bacf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d72ffb45-73a1-482a-8544-24afc078bacf" (UID: "d72ffb45-73a1-482a-8544-24afc078bacf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.118357 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4grtd\" (UniqueName: \"kubernetes.io/projected/d72ffb45-73a1-482a-8544-24afc078bacf-kube-api-access-4grtd\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.118776 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d72ffb45-73a1-482a-8544-24afc078bacf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.118798 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d72ffb45-73a1-482a-8544-24afc078bacf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.659494 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" event={"ID":"3cda6332-8370-4c9a-813d-ebc3e97b91b9","Type":"ContainerStarted","Data":"1be24398d3f466f46170c106963bc8cab76c212b3fac2c6aa1d291ede2930cde"} Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.663502 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" 
event={"ID":"d72ffb45-73a1-482a-8544-24afc078bacf","Type":"ContainerDied","Data":"559415942ed1d4bf9edbeb01b22961f6097f2162ab384e974d4a9cb6382cf229"} Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.663540 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559415942ed1d4bf9edbeb01b22961f6097f2162ab384e974d4a9cb6382cf229" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.663590 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n" Feb 25 11:30:03 crc kubenswrapper[5005]: I0225 11:30:03.935040 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" podStartSLOduration=1.444733577 podStartE2EDuration="3.935019525s" podCreationTimestamp="2026-02-25 11:30:00 +0000 UTC" firstStartedPulling="2026-02-25 11:30:00.761473522 +0000 UTC m=+714.802205869" lastFinishedPulling="2026-02-25 11:30:03.25175948 +0000 UTC m=+717.292491817" observedRunningTime="2026-02-25 11:30:03.683925618 +0000 UTC m=+717.724657945" watchObservedRunningTime="2026-02-25 11:30:03.935019525 +0000 UTC m=+717.975751852" Feb 25 11:30:04 crc kubenswrapper[5005]: I0225 11:30:04.671627 5005 generic.go:334] "Generic (PLEG): container finished" podID="3cda6332-8370-4c9a-813d-ebc3e97b91b9" containerID="1be24398d3f466f46170c106963bc8cab76c212b3fac2c6aa1d291ede2930cde" exitCode=0 Feb 25 11:30:04 crc kubenswrapper[5005]: I0225 11:30:04.671694 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" event={"ID":"3cda6332-8370-4c9a-813d-ebc3e97b91b9","Type":"ContainerDied","Data":"1be24398d3f466f46170c106963bc8cab76c212b3fac2c6aa1d291ede2930cde"} Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.039610 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.157250 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgzjr\" (UniqueName: \"kubernetes.io/projected/3cda6332-8370-4c9a-813d-ebc3e97b91b9-kube-api-access-xgzjr\") pod \"3cda6332-8370-4c9a-813d-ebc3e97b91b9\" (UID: \"3cda6332-8370-4c9a-813d-ebc3e97b91b9\") " Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.163333 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cda6332-8370-4c9a-813d-ebc3e97b91b9-kube-api-access-xgzjr" (OuterVolumeSpecName: "kube-api-access-xgzjr") pod "3cda6332-8370-4c9a-813d-ebc3e97b91b9" (UID: "3cda6332-8370-4c9a-813d-ebc3e97b91b9"). InnerVolumeSpecName "kube-api-access-xgzjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.259366 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgzjr\" (UniqueName: \"kubernetes.io/projected/3cda6332-8370-4c9a-813d-ebc3e97b91b9-kube-api-access-xgzjr\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.688033 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.709261 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533650-ntmlt" event={"ID":"3cda6332-8370-4c9a-813d-ebc3e97b91b9","Type":"ContainerDied","Data":"c96704422c7daf937be49cb8363c9b09a662f466a154a0c8d56e8ddac77c14f5"} Feb 25 11:30:06 crc kubenswrapper[5005]: I0225 11:30:06.709327 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96704422c7daf937be49cb8363c9b09a662f466a154a0c8d56e8ddac77c14f5" Feb 25 11:30:07 crc kubenswrapper[5005]: I0225 11:30:07.092500 5005 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 25 11:30:07 crc kubenswrapper[5005]: I0225 11:30:07.096582 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533644-tplgw"] Feb 25 11:30:07 crc kubenswrapper[5005]: I0225 11:30:07.100659 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533644-tplgw"] Feb 25 11:30:08 crc kubenswrapper[5005]: I0225 11:30:08.693822 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f84fbc8f-fd0e-4c51-8667-f0e672a3aebb" path="/var/lib/kubelet/pods/f84fbc8f-fd0e-4c51-8667-f0e672a3aebb/volumes" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.828185 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc"] Feb 25 11:30:10 crc kubenswrapper[5005]: E0225 11:30:10.828439 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cda6332-8370-4c9a-813d-ebc3e97b91b9" containerName="oc" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.828456 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cda6332-8370-4c9a-813d-ebc3e97b91b9" containerName="oc" Feb 25 
11:30:10 crc kubenswrapper[5005]: E0225 11:30:10.828476 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72ffb45-73a1-482a-8544-24afc078bacf" containerName="collect-profiles" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.828483 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72ffb45-73a1-482a-8544-24afc078bacf" containerName="collect-profiles" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.828596 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72ffb45-73a1-482a-8544-24afc078bacf" containerName="collect-profiles" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.828606 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cda6332-8370-4c9a-813d-ebc3e97b91b9" containerName="oc" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.829714 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.831691 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.844132 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc"] Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.923148 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.923361 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694st\" (UniqueName: \"kubernetes.io/projected/8ecc824f-780b-43ba-8d32-9cc548278165-kube-api-access-694st\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:10 crc kubenswrapper[5005]: I0225 11:30:10.923616 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.025033 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.025116 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.025166 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694st\" (UniqueName: 
\"kubernetes.io/projected/8ecc824f-780b-43ba-8d32-9cc548278165-kube-api-access-694st\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.026166 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.026245 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.057459 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694st\" (UniqueName: \"kubernetes.io/projected/8ecc824f-780b-43ba-8d32-9cc548278165-kube-api-access-694st\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.155998 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.409615 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc"] Feb 25 11:30:11 crc kubenswrapper[5005]: W0225 11:30:11.413539 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ecc824f_780b_43ba_8d32_9cc548278165.slice/crio-bbb77bab212249a199d88bb495ea2b4052abdb043c9c9285861f215cdd458688 WatchSource:0}: Error finding container bbb77bab212249a199d88bb495ea2b4052abdb043c9c9285861f215cdd458688: Status 404 returned error can't find the container with id bbb77bab212249a199d88bb495ea2b4052abdb043c9c9285861f215cdd458688 Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.723421 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" event={"ID":"8ecc824f-780b-43ba-8d32-9cc548278165","Type":"ContainerStarted","Data":"e573e3e896472774a7a6a64b06d81e28d7e86e3ca7332331363862e64cb7ac91"} Feb 25 11:30:11 crc kubenswrapper[5005]: I0225 11:30:11.723496 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" event={"ID":"8ecc824f-780b-43ba-8d32-9cc548278165","Type":"ContainerStarted","Data":"bbb77bab212249a199d88bb495ea2b4052abdb043c9c9285861f215cdd458688"} Feb 25 11:30:12 crc kubenswrapper[5005]: I0225 11:30:12.731778 5005 generic.go:334] "Generic (PLEG): container finished" podID="8ecc824f-780b-43ba-8d32-9cc548278165" containerID="e573e3e896472774a7a6a64b06d81e28d7e86e3ca7332331363862e64cb7ac91" exitCode=0 Feb 25 11:30:12 crc kubenswrapper[5005]: I0225 11:30:12.731834 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" event={"ID":"8ecc824f-780b-43ba-8d32-9cc548278165","Type":"ContainerDied","Data":"e573e3e896472774a7a6a64b06d81e28d7e86e3ca7332331363862e64cb7ac91"} Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.073684 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bnpxl"] Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.075734 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.097223 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnpxl"] Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.153066 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-utilities\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.153201 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-catalog-content\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.153256 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g769p\" (UniqueName: \"kubernetes.io/projected/db5119bb-b706-4a76-b465-e6dee47d8944-kube-api-access-g769p\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" 
Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.255061 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-utilities\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.255932 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-utilities\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.256226 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-catalog-content\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.255295 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-catalog-content\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.256526 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g769p\" (UniqueName: \"kubernetes.io/projected/db5119bb-b706-4a76-b465-e6dee47d8944-kube-api-access-g769p\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.295673 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g769p\" (UniqueName: \"kubernetes.io/projected/db5119bb-b706-4a76-b465-e6dee47d8944-kube-api-access-g769p\") pod \"redhat-operators-bnpxl\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") " pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.448982 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.649663 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnpxl"] Feb 25 11:30:13 crc kubenswrapper[5005]: W0225 11:30:13.657655 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb5119bb_b706_4a76_b465_e6dee47d8944.slice/crio-6a05f5f341193c377ba46daa321dc2706b3a382917332c0ca1bcecece288b7f3 WatchSource:0}: Error finding container 6a05f5f341193c377ba46daa321dc2706b3a382917332c0ca1bcecece288b7f3: Status 404 returned error can't find the container with id 6a05f5f341193c377ba46daa321dc2706b3a382917332c0ca1bcecece288b7f3 Feb 25 11:30:13 crc kubenswrapper[5005]: I0225 11:30:13.737723 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerStarted","Data":"6a05f5f341193c377ba46daa321dc2706b3a382917332c0ca1bcecece288b7f3"} Feb 25 11:30:14 crc kubenswrapper[5005]: I0225 11:30:14.745076 5005 generic.go:334] "Generic (PLEG): container finished" podID="db5119bb-b706-4a76-b465-e6dee47d8944" containerID="7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401" exitCode=0 Feb 25 11:30:14 crc kubenswrapper[5005]: I0225 11:30:14.745150 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" 
event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerDied","Data":"7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401"} Feb 25 11:30:14 crc kubenswrapper[5005]: I0225 11:30:14.749187 5005 generic.go:334] "Generic (PLEG): container finished" podID="8ecc824f-780b-43ba-8d32-9cc548278165" containerID="dbf07c898d7fc64edd71b63d2464e2e304815dd9151b94f96e88d1189868b5a9" exitCode=0 Feb 25 11:30:14 crc kubenswrapper[5005]: I0225 11:30:14.749221 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" event={"ID":"8ecc824f-780b-43ba-8d32-9cc548278165","Type":"ContainerDied","Data":"dbf07c898d7fc64edd71b63d2464e2e304815dd9151b94f96e88d1189868b5a9"} Feb 25 11:30:15 crc kubenswrapper[5005]: I0225 11:30:15.757704 5005 generic.go:334] "Generic (PLEG): container finished" podID="8ecc824f-780b-43ba-8d32-9cc548278165" containerID="68d58507b63779991f1e264bef61a2991e4f3c420043bda2e7634efd7c6c00dc" exitCode=0 Feb 25 11:30:15 crc kubenswrapper[5005]: I0225 11:30:15.757990 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" event={"ID":"8ecc824f-780b-43ba-8d32-9cc548278165","Type":"ContainerDied","Data":"68d58507b63779991f1e264bef61a2991e4f3c420043bda2e7634efd7c6c00dc"} Feb 25 11:30:15 crc kubenswrapper[5005]: I0225 11:30:15.761684 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerStarted","Data":"c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb"} Feb 25 11:30:16 crc kubenswrapper[5005]: I0225 11:30:16.772536 5005 generic.go:334] "Generic (PLEG): container finished" podID="db5119bb-b706-4a76-b465-e6dee47d8944" containerID="c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb" exitCode=0 Feb 25 11:30:16 crc kubenswrapper[5005]: 
I0225 11:30:16.772741 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerDied","Data":"c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb"} Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.085441 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.354119 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-bundle\") pod \"8ecc824f-780b-43ba-8d32-9cc548278165\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.354171 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-util\") pod \"8ecc824f-780b-43ba-8d32-9cc548278165\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.354233 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-694st\" (UniqueName: \"kubernetes.io/projected/8ecc824f-780b-43ba-8d32-9cc548278165-kube-api-access-694st\") pod \"8ecc824f-780b-43ba-8d32-9cc548278165\" (UID: \"8ecc824f-780b-43ba-8d32-9cc548278165\") " Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.355567 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-bundle" (OuterVolumeSpecName: "bundle") pod "8ecc824f-780b-43ba-8d32-9cc548278165" (UID: "8ecc824f-780b-43ba-8d32-9cc548278165"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.363436 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecc824f-780b-43ba-8d32-9cc548278165-kube-api-access-694st" (OuterVolumeSpecName: "kube-api-access-694st") pod "8ecc824f-780b-43ba-8d32-9cc548278165" (UID: "8ecc824f-780b-43ba-8d32-9cc548278165"). InnerVolumeSpecName "kube-api-access-694st". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.393156 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-util" (OuterVolumeSpecName: "util") pod "8ecc824f-780b-43ba-8d32-9cc548278165" (UID: "8ecc824f-780b-43ba-8d32-9cc548278165"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.456771 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-694st\" (UniqueName: \"kubernetes.io/projected/8ecc824f-780b-43ba-8d32-9cc548278165-kube-api-access-694st\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.456827 5005 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.456847 5005 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ecc824f-780b-43ba-8d32-9cc548278165-util\") on node \"crc\" DevicePath \"\"" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.784851 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" 
event={"ID":"8ecc824f-780b-43ba-8d32-9cc548278165","Type":"ContainerDied","Data":"bbb77bab212249a199d88bb495ea2b4052abdb043c9c9285861f215cdd458688"} Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.785431 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb77bab212249a199d88bb495ea2b4052abdb043c9c9285861f215cdd458688" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.785120 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc" Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.789180 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerStarted","Data":"3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4"} Feb 25 11:30:17 crc kubenswrapper[5005]: I0225 11:30:17.812657 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bnpxl" podStartSLOduration=2.133831641 podStartE2EDuration="4.812616021s" podCreationTimestamp="2026-02-25 11:30:13 +0000 UTC" firstStartedPulling="2026-02-25 11:30:14.748506247 +0000 UTC m=+728.789238574" lastFinishedPulling="2026-02-25 11:30:17.427290627 +0000 UTC m=+731.468022954" observedRunningTime="2026-02-25 11:30:17.809325929 +0000 UTC m=+731.850058266" watchObservedRunningTime="2026-02-25 11:30:17.812616021 +0000 UTC m=+731.853348368" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.947267 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-49k98"] Feb 25 11:30:20 crc kubenswrapper[5005]: E0225 11:30:20.947501 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="util" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.947514 5005 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="util" Feb 25 11:30:20 crc kubenswrapper[5005]: E0225 11:30:20.947528 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="extract" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.947534 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="extract" Feb 25 11:30:20 crc kubenswrapper[5005]: E0225 11:30:20.947543 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="pull" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.947549 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="pull" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.947659 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecc824f-780b-43ba-8d32-9cc548278165" containerName="extract" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.948027 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.950425 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.950492 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b2r2t" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.950709 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 25 11:30:20 crc kubenswrapper[5005]: I0225 11:30:20.955981 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-49k98"] Feb 25 11:30:21 crc kubenswrapper[5005]: I0225 11:30:21.004130 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rdv\" (UniqueName: \"kubernetes.io/projected/e9d3d9e7-9646-45be-b459-10fce735fd04-kube-api-access-p8rdv\") pod \"nmstate-operator-694c9596b7-49k98\" (UID: \"e9d3d9e7-9646-45be-b459-10fce735fd04\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" Feb 25 11:30:21 crc kubenswrapper[5005]: I0225 11:30:21.104999 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rdv\" (UniqueName: \"kubernetes.io/projected/e9d3d9e7-9646-45be-b459-10fce735fd04-kube-api-access-p8rdv\") pod \"nmstate-operator-694c9596b7-49k98\" (UID: \"e9d3d9e7-9646-45be-b459-10fce735fd04\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" Feb 25 11:30:21 crc kubenswrapper[5005]: I0225 11:30:21.127138 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rdv\" (UniqueName: \"kubernetes.io/projected/e9d3d9e7-9646-45be-b459-10fce735fd04-kube-api-access-p8rdv\") pod \"nmstate-operator-694c9596b7-49k98\" (UID: 
\"e9d3d9e7-9646-45be-b459-10fce735fd04\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" Feb 25 11:30:21 crc kubenswrapper[5005]: I0225 11:30:21.264937 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" Feb 25 11:30:21 crc kubenswrapper[5005]: I0225 11:30:21.451666 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-49k98"] Feb 25 11:30:21 crc kubenswrapper[5005]: W0225 11:30:21.462499 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d3d9e7_9646_45be_b459_10fce735fd04.slice/crio-0b46850e999bff25d146e1470edd1024dfa26ef717d28e65aaf4e017ee0a8897 WatchSource:0}: Error finding container 0b46850e999bff25d146e1470edd1024dfa26ef717d28e65aaf4e017ee0a8897: Status 404 returned error can't find the container with id 0b46850e999bff25d146e1470edd1024dfa26ef717d28e65aaf4e017ee0a8897 Feb 25 11:30:21 crc kubenswrapper[5005]: I0225 11:30:21.814633 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" event={"ID":"e9d3d9e7-9646-45be-b459-10fce735fd04","Type":"ContainerStarted","Data":"0b46850e999bff25d146e1470edd1024dfa26ef717d28e65aaf4e017ee0a8897"} Feb 25 11:30:23 crc kubenswrapper[5005]: I0225 11:30:23.449792 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:23 crc kubenswrapper[5005]: I0225 11:30:23.450060 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:24 crc kubenswrapper[5005]: I0225 11:30:24.485854 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bnpxl" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="registry-server" probeResult="failure" output=< Feb 
25 11:30:24 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:30:24 crc kubenswrapper[5005]: > Feb 25 11:30:24 crc kubenswrapper[5005]: I0225 11:30:24.841061 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" event={"ID":"e9d3d9e7-9646-45be-b459-10fce735fd04","Type":"ContainerStarted","Data":"12251609e23eade4dda216c553340e75513762ed6f4ff59964e2bc7b3a5d2967"} Feb 25 11:30:24 crc kubenswrapper[5005]: I0225 11:30:24.868062 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-49k98" podStartSLOduration=2.363748789 podStartE2EDuration="4.868023792s" podCreationTimestamp="2026-02-25 11:30:20 +0000 UTC" firstStartedPulling="2026-02-25 11:30:21.465826909 +0000 UTC m=+735.506559236" lastFinishedPulling="2026-02-25 11:30:23.970101912 +0000 UTC m=+738.010834239" observedRunningTime="2026-02-25 11:30:24.857043854 +0000 UTC m=+738.897776211" watchObservedRunningTime="2026-02-25 11:30:24.868023792 +0000 UTC m=+738.908756169" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.054780 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm"] Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.062307 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.069787 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"] Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.070506 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.073184 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm"] Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.077616 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.077877 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lpwvw" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.096892 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vjwbs"] Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.097642 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vjwbs" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.134062 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"] Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.249913 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-ovs-socket\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs" Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.249947 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57d6\" (UniqueName: \"kubernetes.io/projected/fb9e60b3-a104-48de-8db3-d056e7803ed1-kube-api-access-q57d6\") pod \"nmstate-metrics-58c85c668d-7v8sm\" (UID: \"fb9e60b3-a104-48de-8db3-d056e7803ed1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm" Feb 25 11:30:31 crc 
kubenswrapper[5005]: I0225 11:30:31.250000 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9b21027-4967-42b7-bb45-800971bccae6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.250020 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-nmstate-lock\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.250035 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbsfj\" (UniqueName: \"kubernetes.io/projected/f9b21027-4967-42b7-bb45-800971bccae6-kube-api-access-xbsfj\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.250049 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd5jx\" (UniqueName: \"kubernetes.io/projected/e395d174-8019-4c2d-b6eb-01de557ad7f0-kube-api-access-wd5jx\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.250066 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-dbus-socket\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.271945 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"]
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.272612 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.274894 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.275085 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cdtbn"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.275198 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.280804 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"]
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.351169 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-dbus-socket\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.351481 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-ovs-socket\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.351657 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57d6\" (UniqueName: \"kubernetes.io/projected/fb9e60b3-a104-48de-8db3-d056e7803ed1-kube-api-access-q57d6\") pod \"nmstate-metrics-58c85c668d-7v8sm\" (UID: \"fb9e60b3-a104-48de-8db3-d056e7803ed1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.351872 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9b21027-4967-42b7-bb45-800971bccae6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.352084 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-nmstate-lock\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.352204 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd5jx\" (UniqueName: \"kubernetes.io/projected/e395d174-8019-4c2d-b6eb-01de557ad7f0-kube-api-access-wd5jx\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.352542 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbsfj\" (UniqueName: \"kubernetes.io/projected/f9b21027-4967-42b7-bb45-800971bccae6-kube-api-access-xbsfj\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.351604 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-dbus-socket\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.352178 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-nmstate-lock\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.351613 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e395d174-8019-4c2d-b6eb-01de557ad7f0-ovs-socket\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: E0225 11:30:31.352050 5005 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 25 11:30:31 crc kubenswrapper[5005]: E0225 11:30:31.353063 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9b21027-4967-42b7-bb45-800971bccae6-tls-key-pair podName:f9b21027-4967-42b7-bb45-800971bccae6 nodeName:}" failed. No retries permitted until 2026-02-25 11:30:31.853044508 +0000 UTC m=+745.893776835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f9b21027-4967-42b7-bb45-800971bccae6-tls-key-pair") pod "nmstate-webhook-866bcb46dc-jg2hs" (UID: "f9b21027-4967-42b7-bb45-800971bccae6") : secret "openshift-nmstate-webhook" not found
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.370957 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbsfj\" (UniqueName: \"kubernetes.io/projected/f9b21027-4967-42b7-bb45-800971bccae6-kube-api-access-xbsfj\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.374719 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd5jx\" (UniqueName: \"kubernetes.io/projected/e395d174-8019-4c2d-b6eb-01de557ad7f0-kube-api-access-wd5jx\") pod \"nmstate-handler-vjwbs\" (UID: \"e395d174-8019-4c2d-b6eb-01de557ad7f0\") " pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.374933 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57d6\" (UniqueName: \"kubernetes.io/projected/fb9e60b3-a104-48de-8db3-d056e7803ed1-kube-api-access-q57d6\") pod \"nmstate-metrics-58c85c668d-7v8sm\" (UID: \"fb9e60b3-a104-48de-8db3-d056e7803ed1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.384728 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.419930 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.453465 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eb9a2251-b746-4e9b-88b9-2db41d719a6b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.453667 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9a2251-b746-4e9b-88b9-2db41d719a6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.453773 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljvg7\" (UniqueName: \"kubernetes.io/projected/eb9a2251-b746-4e9b-88b9-2db41d719a6b-kube-api-access-ljvg7\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: W0225 11:30:31.460153 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode395d174_8019_4c2d_b6eb_01de557ad7f0.slice/crio-5b3988169863d24ca1720f589415b1ff7033c220590c3ebca2d0d9cf820fb23a WatchSource:0}: Error finding container 5b3988169863d24ca1720f589415b1ff7033c220590c3ebca2d0d9cf820fb23a: Status 404 returned error can't find the container with id 5b3988169863d24ca1720f589415b1ff7033c220590c3ebca2d0d9cf820fb23a
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.478954 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85cd4889fc-c9c2m"]
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.479880 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.491909 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85cd4889fc-c9c2m"]
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.554834 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eb9a2251-b746-4e9b-88b9-2db41d719a6b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.554898 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9a2251-b746-4e9b-88b9-2db41d719a6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.554939 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-serving-cert\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.554960 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-service-ca\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.554977 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-trusted-ca-bundle\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.555000 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljvg7\" (UniqueName: \"kubernetes.io/projected/eb9a2251-b746-4e9b-88b9-2db41d719a6b-kube-api-access-ljvg7\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.555036 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbsf\" (UniqueName: \"kubernetes.io/projected/e1950e5d-1abe-46b3-9508-cd825777fb4d-kube-api-access-hxbsf\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.555121 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-oauth-serving-cert\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.555152 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-oauth-config\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.555186 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-config\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.555825 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eb9a2251-b746-4e9b-88b9-2db41d719a6b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.558325 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb9a2251-b746-4e9b-88b9-2db41d719a6b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.568929 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljvg7\" (UniqueName: \"kubernetes.io/projected/eb9a2251-b746-4e9b-88b9-2db41d719a6b-kube-api-access-ljvg7\") pod \"nmstate-console-plugin-5c78fc5d65-j5r7m\" (UID: \"eb9a2251-b746-4e9b-88b9-2db41d719a6b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.590836 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.596491 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm"]
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.660841 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-oauth-serving-cert\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.660906 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-config\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.660930 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-oauth-config\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.660970 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-serving-cert\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.660986 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-service-ca\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.661003 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-trusted-ca-bundle\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.661027 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxbsf\" (UniqueName: \"kubernetes.io/projected/e1950e5d-1abe-46b3-9508-cd825777fb4d-kube-api-access-hxbsf\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.661825 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-config\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.661861 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-service-ca\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.662145 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-oauth-serving-cert\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.662251 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1950e5d-1abe-46b3-9508-cd825777fb4d-trusted-ca-bundle\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.665625 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-serving-cert\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.666725 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e1950e5d-1abe-46b3-9508-cd825777fb4d-console-oauth-config\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.676955 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbsf\" (UniqueName: \"kubernetes.io/projected/e1950e5d-1abe-46b3-9508-cd825777fb4d-kube-api-access-hxbsf\") pod \"console-85cd4889fc-c9c2m\" (UID: \"e1950e5d-1abe-46b3-9508-cd825777fb4d\") " pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.775328 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m"]
Feb 25 11:30:31 crc kubenswrapper[5005]: W0225 11:30:31.783771 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb9a2251_b746_4e9b_88b9_2db41d719a6b.slice/crio-1d9eec90556108550562afcc8d952a866c243c28c918931a3ff5b8d26e930465 WatchSource:0}: Error finding container 1d9eec90556108550562afcc8d952a866c243c28c918931a3ff5b8d26e930465: Status 404 returned error can't find the container with id 1d9eec90556108550562afcc8d952a866c243c28c918931a3ff5b8d26e930465
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.815304 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85cd4889fc-c9c2m"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.863620 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9b21027-4967-42b7-bb45-800971bccae6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.868507 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f9b21027-4967-42b7-bb45-800971bccae6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jg2hs\" (UID: \"f9b21027-4967-42b7-bb45-800971bccae6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.890270 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m" event={"ID":"eb9a2251-b746-4e9b-88b9-2db41d719a6b","Type":"ContainerStarted","Data":"1d9eec90556108550562afcc8d952a866c243c28c918931a3ff5b8d26e930465"}
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.897303 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm" event={"ID":"fb9e60b3-a104-48de-8db3-d056e7803ed1","Type":"ContainerStarted","Data":"125dd2a9d5b2c28b742df0de101737898729ff406e0c77067ac71d10f1c4feff"}
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.898821 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vjwbs" event={"ID":"e395d174-8019-4c2d-b6eb-01de557ad7f0","Type":"ContainerStarted","Data":"5b3988169863d24ca1720f589415b1ff7033c220590c3ebca2d0d9cf820fb23a"}
Feb 25 11:30:31 crc kubenswrapper[5005]: I0225 11:30:31.996387 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:32 crc kubenswrapper[5005]: I0225 11:30:32.041694 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85cd4889fc-c9c2m"]
Feb 25 11:30:32 crc kubenswrapper[5005]: W0225 11:30:32.050657 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1950e5d_1abe_46b3_9508_cd825777fb4d.slice/crio-35db010800373efccc0059615254ca75d388d4d90dbc69137e50b865e81814e3 WatchSource:0}: Error finding container 35db010800373efccc0059615254ca75d388d4d90dbc69137e50b865e81814e3: Status 404 returned error can't find the container with id 35db010800373efccc0059615254ca75d388d4d90dbc69137e50b865e81814e3
Feb 25 11:30:32 crc kubenswrapper[5005]: I0225 11:30:32.251515 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"]
Feb 25 11:30:32 crc kubenswrapper[5005]: W0225 11:30:32.255299 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b21027_4967_42b7_bb45_800971bccae6.slice/crio-39c2f33a1ea61eaaec341116c5b3deefc8cce9bf6691b7289894ffb469fef57b WatchSource:0}: Error finding container 39c2f33a1ea61eaaec341116c5b3deefc8cce9bf6691b7289894ffb469fef57b: Status 404 returned error can't find the container with id 39c2f33a1ea61eaaec341116c5b3deefc8cce9bf6691b7289894ffb469fef57b
Feb 25 11:30:32 crc kubenswrapper[5005]: I0225 11:30:32.907059 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs" event={"ID":"f9b21027-4967-42b7-bb45-800971bccae6","Type":"ContainerStarted","Data":"39c2f33a1ea61eaaec341116c5b3deefc8cce9bf6691b7289894ffb469fef57b"}
Feb 25 11:30:32 crc kubenswrapper[5005]: I0225 11:30:32.911069 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85cd4889fc-c9c2m" event={"ID":"e1950e5d-1abe-46b3-9508-cd825777fb4d","Type":"ContainerStarted","Data":"881f478787f7fced6b105fc78307d3363c5d77e87c2cbabbe26e9db4a1ff2968"}
Feb 25 11:30:32 crc kubenswrapper[5005]: I0225 11:30:32.911105 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85cd4889fc-c9c2m" event={"ID":"e1950e5d-1abe-46b3-9508-cd825777fb4d","Type":"ContainerStarted","Data":"35db010800373efccc0059615254ca75d388d4d90dbc69137e50b865e81814e3"}
Feb 25 11:30:32 crc kubenswrapper[5005]: I0225 11:30:32.940421 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85cd4889fc-c9c2m" podStartSLOduration=1.940397074 podStartE2EDuration="1.940397074s" podCreationTimestamp="2026-02-25 11:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:30:32.936856515 +0000 UTC m=+746.977588842" watchObservedRunningTime="2026-02-25 11:30:32.940397074 +0000 UTC m=+746.981129411"
Feb 25 11:30:33 crc kubenswrapper[5005]: I0225 11:30:33.518745 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bnpxl"
Feb 25 11:30:33 crc kubenswrapper[5005]: I0225 11:30:33.606843 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bnpxl"
Feb 25 11:30:33 crc kubenswrapper[5005]: I0225 11:30:33.748693 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnpxl"]
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.925776 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vjwbs" event={"ID":"e395d174-8019-4c2d-b6eb-01de557ad7f0","Type":"ContainerStarted","Data":"9e338369d54531589999c0790d7815bf16d2282a73c2fe137f62e30b2886b690"}
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.926271 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vjwbs"
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.927418 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m" event={"ID":"eb9a2251-b746-4e9b-88b9-2db41d719a6b","Type":"ContainerStarted","Data":"588c0acc97000feaea6aa51698bd45258e4a7d560a833e3a7917260301c79d56"}
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.928969 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm" event={"ID":"fb9e60b3-a104-48de-8db3-d056e7803ed1","Type":"ContainerStarted","Data":"a9eeaf8f74018e020358a30e1bc69615ffa444374739b5f90ce388a21129a167"}
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.930728 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs" event={"ID":"f9b21027-4967-42b7-bb45-800971bccae6","Type":"ContainerStarted","Data":"5c0909aefa731bd7b88fd66f0fe15630d121c2a491fdf2c76c2a22c77f7c339b"}
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.930827 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bnpxl" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="registry-server" containerID="cri-o://3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4" gracePeriod=2
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.930968 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs"
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.968748 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vjwbs" podStartSLOduration=1.075637164 podStartE2EDuration="3.968723409s" podCreationTimestamp="2026-02-25 11:30:31 +0000 UTC" firstStartedPulling="2026-02-25 11:30:31.463753069 +0000 UTC m=+745.504485396" lastFinishedPulling="2026-02-25 11:30:34.356839314 +0000 UTC m=+748.397571641" observedRunningTime="2026-02-25 11:30:34.95124089 +0000 UTC m=+748.991973257" watchObservedRunningTime="2026-02-25 11:30:34.968723409 +0000 UTC m=+749.009455766"
Feb 25 11:30:34 crc kubenswrapper[5005]: I0225 11:30:34.979495 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j5r7m" podStartSLOduration=1.4081848940000001 podStartE2EDuration="3.979479221s" podCreationTimestamp="2026-02-25 11:30:31 +0000 UTC" firstStartedPulling="2026-02-25 11:30:31.785830095 +0000 UTC m=+745.826562422" lastFinishedPulling="2026-02-25 11:30:34.357124432 +0000 UTC m=+748.397856749" observedRunningTime="2026-02-25 11:30:34.967243814 +0000 UTC m=+749.007976181" watchObservedRunningTime="2026-02-25 11:30:34.979479221 +0000 UTC m=+749.020211548"
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.012635 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs" podStartSLOduration=1.909418869 podStartE2EDuration="4.012616402s" podCreationTimestamp="2026-02-25 11:30:31 +0000 UTC" firstStartedPulling="2026-02-25 11:30:32.257931833 +0000 UTC m=+746.298664160" lastFinishedPulling="2026-02-25 11:30:34.361129366 +0000 UTC m=+748.401861693" observedRunningTime="2026-02-25 11:30:35.012303632 +0000 UTC m=+749.053035959" watchObservedRunningTime="2026-02-25 11:30:35.012616402 +0000 UTC m=+749.053348729"
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.275101 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnpxl"
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.322143 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g769p\" (UniqueName: \"kubernetes.io/projected/db5119bb-b706-4a76-b465-e6dee47d8944-kube-api-access-g769p\") pod \"db5119bb-b706-4a76-b465-e6dee47d8944\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") "
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.322429 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-utilities\") pod \"db5119bb-b706-4a76-b465-e6dee47d8944\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") "
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.322484 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-catalog-content\") pod \"db5119bb-b706-4a76-b465-e6dee47d8944\" (UID: \"db5119bb-b706-4a76-b465-e6dee47d8944\") "
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.323542 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-utilities" (OuterVolumeSpecName: "utilities") pod "db5119bb-b706-4a76-b465-e6dee47d8944" (UID: "db5119bb-b706-4a76-b465-e6dee47d8944"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.330629 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5119bb-b706-4a76-b465-e6dee47d8944-kube-api-access-g769p" (OuterVolumeSpecName: "kube-api-access-g769p") pod "db5119bb-b706-4a76-b465-e6dee47d8944" (UID: "db5119bb-b706-4a76-b465-e6dee47d8944"). InnerVolumeSpecName "kube-api-access-g769p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.423428 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-utilities\") on node \"crc\" DevicePath \"\""
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.423469 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g769p\" (UniqueName: \"kubernetes.io/projected/db5119bb-b706-4a76-b465-e6dee47d8944-kube-api-access-g769p\") on node \"crc\" DevicePath \"\""
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.466983 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db5119bb-b706-4a76-b465-e6dee47d8944" (UID: "db5119bb-b706-4a76-b465-e6dee47d8944"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.525158 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5119bb-b706-4a76-b465-e6dee47d8944-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.939835 5005 generic.go:334] "Generic (PLEG): container finished" podID="db5119bb-b706-4a76-b465-e6dee47d8944" containerID="3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4" exitCode=0
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.939916 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerDied","Data":"3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4"}
Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.939932 5005 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-bnpxl" Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.940123 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnpxl" event={"ID":"db5119bb-b706-4a76-b465-e6dee47d8944","Type":"ContainerDied","Data":"6a05f5f341193c377ba46daa321dc2706b3a382917332c0ca1bcecece288b7f3"} Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.940147 5005 scope.go:117] "RemoveContainer" containerID="3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4" Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.963032 5005 scope.go:117] "RemoveContainer" containerID="c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb" Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.985672 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnpxl"] Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.989194 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bnpxl"] Feb 25 11:30:35 crc kubenswrapper[5005]: I0225 11:30:35.998527 5005 scope.go:117] "RemoveContainer" containerID="7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.018701 5005 scope.go:117] "RemoveContainer" containerID="3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4" Feb 25 11:30:36 crc kubenswrapper[5005]: E0225 11:30:36.019190 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4\": container with ID starting with 3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4 not found: ID does not exist" containerID="3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.019223 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4"} err="failed to get container status \"3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4\": rpc error: code = NotFound desc = could not find container \"3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4\": container with ID starting with 3a54f9bfef47698fd02a411d83d90449a72a976c5d6bf9ccfb3f19f58ef9b3b4 not found: ID does not exist" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.019243 5005 scope.go:117] "RemoveContainer" containerID="c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb" Feb 25 11:30:36 crc kubenswrapper[5005]: E0225 11:30:36.019585 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb\": container with ID starting with c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb not found: ID does not exist" containerID="c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.019609 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb"} err="failed to get container status \"c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb\": rpc error: code = NotFound desc = could not find container \"c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb\": container with ID starting with c0eb983547ce3a6d24395a0c731324cb0895af2bfe0daa7e116f0fec968a96eb not found: ID does not exist" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.019622 5005 scope.go:117] "RemoveContainer" containerID="7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401" Feb 25 11:30:36 crc kubenswrapper[5005]: E0225 
11:30:36.019839 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401\": container with ID starting with 7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401 not found: ID does not exist" containerID="7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.019859 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401"} err="failed to get container status \"7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401\": rpc error: code = NotFound desc = could not find container \"7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401\": container with ID starting with 7319dd6b0f31864e31ffe1632e267b4d0f4d61834ca86b00b265837940f98401 not found: ID does not exist" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.698230 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" path="/var/lib/kubelet/pods/db5119bb-b706-4a76-b465-e6dee47d8944/volumes" Feb 25 11:30:36 crc kubenswrapper[5005]: I0225 11:30:36.951627 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm" event={"ID":"fb9e60b3-a104-48de-8db3-d056e7803ed1","Type":"ContainerStarted","Data":"18ad99d29c34e304b9bcabd9016aeb03bcd5de59daafd3a18239dad168ca0912"} Feb 25 11:30:41 crc kubenswrapper[5005]: I0225 11:30:41.458017 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vjwbs" Feb 25 11:30:41 crc kubenswrapper[5005]: I0225 11:30:41.491792 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7v8sm" podStartSLOduration=5.325396645 
podStartE2EDuration="10.491755174s" podCreationTimestamp="2026-02-25 11:30:31 +0000 UTC" firstStartedPulling="2026-02-25 11:30:31.613876565 +0000 UTC m=+745.654608892" lastFinishedPulling="2026-02-25 11:30:36.780235094 +0000 UTC m=+750.820967421" observedRunningTime="2026-02-25 11:30:36.968875077 +0000 UTC m=+751.009607454" watchObservedRunningTime="2026-02-25 11:30:41.491755174 +0000 UTC m=+755.532487551" Feb 25 11:30:41 crc kubenswrapper[5005]: I0225 11:30:41.816549 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85cd4889fc-c9c2m" Feb 25 11:30:41 crc kubenswrapper[5005]: I0225 11:30:41.816641 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85cd4889fc-c9c2m" Feb 25 11:30:41 crc kubenswrapper[5005]: I0225 11:30:41.826496 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85cd4889fc-c9c2m" Feb 25 11:30:42 crc kubenswrapper[5005]: I0225 11:30:42.004939 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85cd4889fc-c9c2m" Feb 25 11:30:42 crc kubenswrapper[5005]: I0225 11:30:42.077155 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-277gg"] Feb 25 11:30:52 crc kubenswrapper[5005]: I0225 11:30:52.004763 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jg2hs" Feb 25 11:30:58 crc kubenswrapper[5005]: I0225 11:30:58.087331 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:30:58 crc kubenswrapper[5005]: I0225 11:30:58.088017 5005 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.337440 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp"] Feb 25 11:31:04 crc kubenswrapper[5005]: E0225 11:31:04.338173 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="extract-content" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.338188 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="extract-content" Feb 25 11:31:04 crc kubenswrapper[5005]: E0225 11:31:04.338213 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="extract-utilities" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.338219 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="extract-utilities" Feb 25 11:31:04 crc kubenswrapper[5005]: E0225 11:31:04.338226 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="registry-server" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.338231 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="registry-server" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.338329 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5119bb-b706-4a76-b465-e6dee47d8944" containerName="registry-server" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.339113 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.341221 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.354319 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp"] Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.536677 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.536816 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddk4d\" (UniqueName: \"kubernetes.io/projected/63201af9-c23b-44a7-9d91-97243558a963-kube-api-access-ddk4d\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.536875 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: 
I0225 11:31:04.638053 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddk4d\" (UniqueName: \"kubernetes.io/projected/63201af9-c23b-44a7-9d91-97243558a963-kube-api-access-ddk4d\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.638169 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.638234 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.638658 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.638806 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.660045 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddk4d\" (UniqueName: \"kubernetes.io/projected/63201af9-c23b-44a7-9d91-97243558a963-kube-api-access-ddk4d\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:04 crc kubenswrapper[5005]: I0225 11:31:04.957471 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:05 crc kubenswrapper[5005]: I0225 11:31:05.206903 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp"] Feb 25 11:31:06 crc kubenswrapper[5005]: I0225 11:31:06.157429 5005 generic.go:334] "Generic (PLEG): container finished" podID="63201af9-c23b-44a7-9d91-97243558a963" containerID="caa9941a5d8df812ff223b0f3ac0d942ebbed737790df35600a9dca6532b5035" exitCode=0 Feb 25 11:31:06 crc kubenswrapper[5005]: I0225 11:31:06.157542 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" event={"ID":"63201af9-c23b-44a7-9d91-97243558a963","Type":"ContainerDied","Data":"caa9941a5d8df812ff223b0f3ac0d942ebbed737790df35600a9dca6532b5035"} Feb 25 11:31:06 crc kubenswrapper[5005]: I0225 11:31:06.157763 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" event={"ID":"63201af9-c23b-44a7-9d91-97243558a963","Type":"ContainerStarted","Data":"974383abb7dcbce138b5e234920dccdd06a916c1bcaeed25a01ce531c20eae76"} Feb 25 11:31:06 crc kubenswrapper[5005]: I0225 11:31:06.159767 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.144318 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-277gg" podUID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" containerName="console" containerID="cri-o://128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec" gracePeriod=15 Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.370580 5005 scope.go:117] "RemoveContainer" containerID="2e7fea44d02b32208a7cd6f694dcd414aba496a9311ea694238a071f385562da" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.505611 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-277gg_cf5c0827-c687-4ab2-a02f-7b74d00a57db/console/0.log" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.506009 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.693749 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-service-ca\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.693919 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-trusted-ca-bundle\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.693989 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s8rv\" (UniqueName: \"kubernetes.io/projected/cf5c0827-c687-4ab2-a02f-7b74d00a57db-kube-api-access-4s8rv\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.694036 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-serving-cert\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.694071 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-oauth-serving-cert\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.694135 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-config\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.694233 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-oauth-config\") pod \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\" (UID: \"cf5c0827-c687-4ab2-a02f-7b74d00a57db\") " Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.694985 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.694999 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.695682 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-config" (OuterVolumeSpecName: "console-config") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.695929 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-service-ca" (OuterVolumeSpecName: "service-ca") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.700793 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.702772 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5c0827-c687-4ab2-a02f-7b74d00a57db-kube-api-access-4s8rv" (OuterVolumeSpecName: "kube-api-access-4s8rv") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "kube-api-access-4s8rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.703554 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cf5c0827-c687-4ab2-a02f-7b74d00a57db" (UID: "cf5c0827-c687-4ab2-a02f-7b74d00a57db"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795837 5005 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-service-ca\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795878 5005 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795895 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s8rv\" (UniqueName: \"kubernetes.io/projected/cf5c0827-c687-4ab2-a02f-7b74d00a57db-kube-api-access-4s8rv\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795907 5005 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795920 5005 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795932 5005 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:07 crc kubenswrapper[5005]: I0225 11:31:07.795944 5005 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf5c0827-c687-4ab2-a02f-7b74d00a57db-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:08 crc 
kubenswrapper[5005]: I0225 11:31:08.176133 5005 generic.go:334] "Generic (PLEG): container finished" podID="63201af9-c23b-44a7-9d91-97243558a963" containerID="a2b8fbebf157fcf624fc4c2e9975bcdbcbb598f1fe00fcca0a5e409a12af76d5" exitCode=0 Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.176199 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" event={"ID":"63201af9-c23b-44a7-9d91-97243558a963","Type":"ContainerDied","Data":"a2b8fbebf157fcf624fc4c2e9975bcdbcbb598f1fe00fcca0a5e409a12af76d5"} Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.178025 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-277gg_cf5c0827-c687-4ab2-a02f-7b74d00a57db/console/0.log" Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.178100 5005 generic.go:334] "Generic (PLEG): container finished" podID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" containerID="128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec" exitCode=2 Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.178155 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-277gg" event={"ID":"cf5c0827-c687-4ab2-a02f-7b74d00a57db","Type":"ContainerDied","Data":"128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec"} Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.178209 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-277gg" event={"ID":"cf5c0827-c687-4ab2-a02f-7b74d00a57db","Type":"ContainerDied","Data":"063de13593698c3e6a9b8840a86e4ac3c7f7b9d112c02c0c914c432e5765ec32"} Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.178168 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-277gg" Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.178253 5005 scope.go:117] "RemoveContainer" containerID="128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec" Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.200271 5005 scope.go:117] "RemoveContainer" containerID="128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec" Feb 25 11:31:08 crc kubenswrapper[5005]: E0225 11:31:08.201120 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec\": container with ID starting with 128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec not found: ID does not exist" containerID="128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec" Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.201168 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec"} err="failed to get container status \"128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec\": rpc error: code = NotFound desc = could not find container \"128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec\": container with ID starting with 128d82747dd14ddb57ac9863c67ba1916c3c0891b7d9360683c34c2caf2380ec not found: ID does not exist" Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.221572 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-277gg"] Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.226203 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-277gg"] Feb 25 11:31:08 crc kubenswrapper[5005]: I0225 11:31:08.696166 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" 
path="/var/lib/kubelet/pods/cf5c0827-c687-4ab2-a02f-7b74d00a57db/volumes" Feb 25 11:31:09 crc kubenswrapper[5005]: I0225 11:31:09.189156 5005 generic.go:334] "Generic (PLEG): container finished" podID="63201af9-c23b-44a7-9d91-97243558a963" containerID="a89ca06eb2c2b1127e42b02bd74eb32121a87f195878356ef7db9494a69c7179" exitCode=0 Feb 25 11:31:09 crc kubenswrapper[5005]: I0225 11:31:09.189206 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" event={"ID":"63201af9-c23b-44a7-9d91-97243558a963","Type":"ContainerDied","Data":"a89ca06eb2c2b1127e42b02bd74eb32121a87f195878356ef7db9494a69c7179"} Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.478544 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.629511 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-util\") pod \"63201af9-c23b-44a7-9d91-97243558a963\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.629554 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddk4d\" (UniqueName: \"kubernetes.io/projected/63201af9-c23b-44a7-9d91-97243558a963-kube-api-access-ddk4d\") pod \"63201af9-c23b-44a7-9d91-97243558a963\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.629583 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-bundle\") pod \"63201af9-c23b-44a7-9d91-97243558a963\" (UID: \"63201af9-c23b-44a7-9d91-97243558a963\") " Feb 25 11:31:10 crc 
kubenswrapper[5005]: I0225 11:31:10.630832 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-bundle" (OuterVolumeSpecName: "bundle") pod "63201af9-c23b-44a7-9d91-97243558a963" (UID: "63201af9-c23b-44a7-9d91-97243558a963"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.638700 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63201af9-c23b-44a7-9d91-97243558a963-kube-api-access-ddk4d" (OuterVolumeSpecName: "kube-api-access-ddk4d") pod "63201af9-c23b-44a7-9d91-97243558a963" (UID: "63201af9-c23b-44a7-9d91-97243558a963"). InnerVolumeSpecName "kube-api-access-ddk4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.660119 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-util" (OuterVolumeSpecName: "util") pod "63201af9-c23b-44a7-9d91-97243558a963" (UID: "63201af9-c23b-44a7-9d91-97243558a963"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.731022 5005 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.731054 5005 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63201af9-c23b-44a7-9d91-97243558a963-util\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:10 crc kubenswrapper[5005]: I0225 11:31:10.731067 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddk4d\" (UniqueName: \"kubernetes.io/projected/63201af9-c23b-44a7-9d91-97243558a963-kube-api-access-ddk4d\") on node \"crc\" DevicePath \"\"" Feb 25 11:31:11 crc kubenswrapper[5005]: I0225 11:31:11.204627 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" event={"ID":"63201af9-c23b-44a7-9d91-97243558a963","Type":"ContainerDied","Data":"974383abb7dcbce138b5e234920dccdd06a916c1bcaeed25a01ce531c20eae76"} Feb 25 11:31:11 crc kubenswrapper[5005]: I0225 11:31:11.204673 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974383abb7dcbce138b5e234920dccdd06a916c1bcaeed25a01ce531c20eae76" Feb 25 11:31:11 crc kubenswrapper[5005]: I0225 11:31:11.204736 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.468786 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx"] Feb 25 11:31:19 crc kubenswrapper[5005]: E0225 11:31:19.469246 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" containerName="console" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469257 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" containerName="console" Feb 25 11:31:19 crc kubenswrapper[5005]: E0225 11:31:19.469269 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63201af9-c23b-44a7-9d91-97243558a963" containerName="util" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469276 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="63201af9-c23b-44a7-9d91-97243558a963" containerName="util" Feb 25 11:31:19 crc kubenswrapper[5005]: E0225 11:31:19.469285 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63201af9-c23b-44a7-9d91-97243558a963" containerName="pull" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469292 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="63201af9-c23b-44a7-9d91-97243558a963" containerName="pull" Feb 25 11:31:19 crc kubenswrapper[5005]: E0225 11:31:19.469307 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63201af9-c23b-44a7-9d91-97243558a963" containerName="extract" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469313 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="63201af9-c23b-44a7-9d91-97243558a963" containerName="extract" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469411 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="63201af9-c23b-44a7-9d91-97243558a963" 
containerName="extract" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469422 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5c0827-c687-4ab2-a02f-7b74d00a57db" containerName="console" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.469796 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.472038 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.472101 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.472935 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-5x6kz" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.473755 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.476269 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.497232 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx"] Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.543701 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e72e13f-7621-408a-b858-4b83e090769b-apiservice-cert\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 
25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.543779 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e72e13f-7621-408a-b858-4b83e090769b-webhook-cert\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.543806 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbv4x\" (UniqueName: \"kubernetes.io/projected/9e72e13f-7621-408a-b858-4b83e090769b-kube-api-access-sbv4x\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.644646 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e72e13f-7621-408a-b858-4b83e090769b-apiservice-cert\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.644737 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e72e13f-7621-408a-b858-4b83e090769b-webhook-cert\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.644764 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbv4x\" (UniqueName: 
\"kubernetes.io/projected/9e72e13f-7621-408a-b858-4b83e090769b-kube-api-access-sbv4x\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.657575 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e72e13f-7621-408a-b858-4b83e090769b-apiservice-cert\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.665963 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e72e13f-7621-408a-b858-4b83e090769b-webhook-cert\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.682348 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbv4x\" (UniqueName: \"kubernetes.io/projected/9e72e13f-7621-408a-b858-4b83e090769b-kube-api-access-sbv4x\") pod \"metallb-operator-controller-manager-845fb74b78-rxqvx\" (UID: \"9e72e13f-7621-408a-b858-4b83e090769b\") " pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.777073 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n"] Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.777729 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.779961 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.786812 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.786893 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5zvb2" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.787928 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.802131 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n"] Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.847012 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gdl\" (UniqueName: \"kubernetes.io/projected/389ed2af-b504-4485-8232-bf3a8fed70e8-kube-api-access-m4gdl\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.847049 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/389ed2af-b504-4485-8232-bf3a8fed70e8-webhook-cert\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 
11:31:19.847109 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/389ed2af-b504-4485-8232-bf3a8fed70e8-apiservice-cert\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.950928 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/389ed2af-b504-4485-8232-bf3a8fed70e8-apiservice-cert\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.950999 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gdl\" (UniqueName: \"kubernetes.io/projected/389ed2af-b504-4485-8232-bf3a8fed70e8-kube-api-access-m4gdl\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.951020 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/389ed2af-b504-4485-8232-bf3a8fed70e8-webhook-cert\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.960640 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/389ed2af-b504-4485-8232-bf3a8fed70e8-webhook-cert\") pod 
\"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.961076 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/389ed2af-b504-4485-8232-bf3a8fed70e8-apiservice-cert\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:19 crc kubenswrapper[5005]: I0225 11:31:19.968830 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gdl\" (UniqueName: \"kubernetes.io/projected/389ed2af-b504-4485-8232-bf3a8fed70e8-kube-api-access-m4gdl\") pod \"metallb-operator-webhook-server-58d6fc4c6d-lmf6n\" (UID: \"389ed2af-b504-4485-8232-bf3a8fed70e8\") " pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:20 crc kubenswrapper[5005]: I0225 11:31:20.144692 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:20 crc kubenswrapper[5005]: I0225 11:31:20.275085 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx"] Feb 25 11:31:20 crc kubenswrapper[5005]: W0225 11:31:20.287143 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e72e13f_7621_408a_b858_4b83e090769b.slice/crio-14c5de021a44097bef04eeb5f05b09be0b44ab4199fc81c24454295f8a2f1abc WatchSource:0}: Error finding container 14c5de021a44097bef04eeb5f05b09be0b44ab4199fc81c24454295f8a2f1abc: Status 404 returned error can't find the container with id 14c5de021a44097bef04eeb5f05b09be0b44ab4199fc81c24454295f8a2f1abc Feb 25 11:31:20 crc kubenswrapper[5005]: I0225 11:31:20.559119 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n"] Feb 25 11:31:20 crc kubenswrapper[5005]: W0225 11:31:20.569916 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod389ed2af_b504_4485_8232_bf3a8fed70e8.slice/crio-1ffcef1525092ae5191be7670702ffa0e25ae9c25cc0a18cae2407676de826f7 WatchSource:0}: Error finding container 1ffcef1525092ae5191be7670702ffa0e25ae9c25cc0a18cae2407676de826f7: Status 404 returned error can't find the container with id 1ffcef1525092ae5191be7670702ffa0e25ae9c25cc0a18cae2407676de826f7 Feb 25 11:31:21 crc kubenswrapper[5005]: I0225 11:31:21.274867 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" event={"ID":"389ed2af-b504-4485-8232-bf3a8fed70e8","Type":"ContainerStarted","Data":"1ffcef1525092ae5191be7670702ffa0e25ae9c25cc0a18cae2407676de826f7"} Feb 25 11:31:21 crc kubenswrapper[5005]: I0225 11:31:21.275671 5005 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" event={"ID":"9e72e13f-7621-408a-b858-4b83e090769b","Type":"ContainerStarted","Data":"14c5de021a44097bef04eeb5f05b09be0b44ab4199fc81c24454295f8a2f1abc"} Feb 25 11:31:25 crc kubenswrapper[5005]: I0225 11:31:25.301918 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" event={"ID":"9e72e13f-7621-408a-b858-4b83e090769b","Type":"ContainerStarted","Data":"67ab67a119aeda351cc631e529cd53fc634da7a47bdc537ae0191ea5e4877ec1"} Feb 25 11:31:25 crc kubenswrapper[5005]: I0225 11:31:25.302532 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:31:25 crc kubenswrapper[5005]: I0225 11:31:25.304140 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" event={"ID":"389ed2af-b504-4485-8232-bf3a8fed70e8","Type":"ContainerStarted","Data":"454d386035f09d50bd77dc01457f3c32fc65dcee2464661f00077f89ff638b12"} Feb 25 11:31:25 crc kubenswrapper[5005]: I0225 11:31:25.304277 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:25 crc kubenswrapper[5005]: I0225 11:31:25.327130 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" podStartSLOduration=2.264801648 podStartE2EDuration="6.327110755s" podCreationTimestamp="2026-02-25 11:31:19 +0000 UTC" firstStartedPulling="2026-02-25 11:31:20.299146676 +0000 UTC m=+794.339879003" lastFinishedPulling="2026-02-25 11:31:24.361455783 +0000 UTC m=+798.402188110" observedRunningTime="2026-02-25 11:31:25.324410962 +0000 UTC m=+799.365143289" watchObservedRunningTime="2026-02-25 11:31:25.327110755 +0000 UTC m=+799.367843072" Feb 25 11:31:25 
crc kubenswrapper[5005]: I0225 11:31:25.346865 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" podStartSLOduration=2.5374944729999998 podStartE2EDuration="6.346846602s" podCreationTimestamp="2026-02-25 11:31:19 +0000 UTC" firstStartedPulling="2026-02-25 11:31:20.574474332 +0000 UTC m=+794.615206669" lastFinishedPulling="2026-02-25 11:31:24.383826471 +0000 UTC m=+798.424558798" observedRunningTime="2026-02-25 11:31:25.344663235 +0000 UTC m=+799.385395562" watchObservedRunningTime="2026-02-25 11:31:25.346846602 +0000 UTC m=+799.387578929" Feb 25 11:31:28 crc kubenswrapper[5005]: I0225 11:31:28.087118 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:31:28 crc kubenswrapper[5005]: I0225 11:31:28.087817 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:31:40 crc kubenswrapper[5005]: I0225 11:31:40.151907 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58d6fc4c6d-lmf6n" Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.086856 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 
11:31:58.088598 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.088705 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.089575 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"725882823e770cccc62c7fb201d52910ad904925f31e4c131e1bed3c3ec5a21f"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.089674 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://725882823e770cccc62c7fb201d52910ad904925f31e4c131e1bed3c3ec5a21f" gracePeriod=600 Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.531395 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="725882823e770cccc62c7fb201d52910ad904925f31e4c131e1bed3c3ec5a21f" exitCode=0 Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.531413 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"725882823e770cccc62c7fb201d52910ad904925f31e4c131e1bed3c3ec5a21f"} Feb 25 11:31:58 crc 
kubenswrapper[5005]: I0225 11:31:58.531725 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"bd803ca207119b9d3512cdb7e88427039abb6164280815395a904d77de0c108e"} Feb 25 11:31:58 crc kubenswrapper[5005]: I0225 11:31:58.531809 5005 scope.go:117] "RemoveContainer" containerID="ddfc1e4f47aa879f9f576a0bce4a8073c09097db07a6b229402441fe59b04947" Feb 25 11:31:59 crc kubenswrapper[5005]: I0225 11:31:59.789787 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-845fb74b78-rxqvx" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.135423 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533652-h5tzk"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.136593 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.138394 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.138811 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.138977 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.139906 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533652-h5tzk"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.292276 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckdcl\" (UniqueName: 
\"kubernetes.io/projected/919273df-651a-4f1d-a11c-739f7dabad38-kube-api-access-ckdcl\") pod \"auto-csr-approver-29533652-h5tzk\" (UID: \"919273df-651a-4f1d-a11c-739f7dabad38\") " pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.394225 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckdcl\" (UniqueName: \"kubernetes.io/projected/919273df-651a-4f1d-a11c-739f7dabad38-kube-api-access-ckdcl\") pod \"auto-csr-approver-29533652-h5tzk\" (UID: \"919273df-651a-4f1d-a11c-739f7dabad38\") " pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.416486 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckdcl\" (UniqueName: \"kubernetes.io/projected/919273df-651a-4f1d-a11c-739f7dabad38-kube-api-access-ckdcl\") pod \"auto-csr-approver-29533652-h5tzk\" (UID: \"919273df-651a-4f1d-a11c-739f7dabad38\") " pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.452192 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.531870 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2g97t"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.535619 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.538914 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.539008 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.539022 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dmf74" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.557115 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.557977 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.560610 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.560829 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.644755 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9h4p9"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.646228 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.652237 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.652265 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.653398 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.664542 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xqsqc" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.675658 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-j6mkr"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.676799 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.682067 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-j6mkr"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.687528 5005 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708198 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-conf\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708267 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrkd\" (UniqueName: \"kubernetes.io/projected/4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae-kube-api-access-nhrkd\") pod \"frr-k8s-webhook-server-78b44bf5bb-fljdj\" (UID: \"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708290 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhbk\" (UniqueName: \"kubernetes.io/projected/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-kube-api-access-wdhbk\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708321 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fljdj\" (UID: \"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708339 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-metrics-certs\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708361 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-startup\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708402 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-sockets\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708422 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-reloader\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.708443 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-metrics\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.811922 
5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fljdj\" (UID: \"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812249 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-cert\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812273 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-metrics-certs\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812301 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-startup\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812323 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0c5c912d-da16-4405-864f-3459d2ef4e9c-metallb-excludel2\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812346 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-sockets\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812365 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-reloader\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812470 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhvh\" (UniqueName: \"kubernetes.io/projected/0c5c912d-da16-4405-864f-3459d2ef4e9c-kube-api-access-kwhvh\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812536 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-metrics\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812568 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-conf\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812630 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " 
pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812680 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrkd\" (UniqueName: \"kubernetes.io/projected/4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae-kube-api-access-nhrkd\") pod \"frr-k8s-webhook-server-78b44bf5bb-fljdj\" (UID: \"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812700 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-metrics-certs\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812735 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhbk\" (UniqueName: \"kubernetes.io/projected/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-kube-api-access-wdhbk\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812762 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99hn\" (UniqueName: \"kubernetes.io/projected/c2ccc2c1-2a28-4507-8713-2aad869af209-kube-api-access-l99hn\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812763 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-reloader\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 
11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.812836 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-metrics-certs\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.813217 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-metrics\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.813420 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-conf\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.814636 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-startup\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.815233 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-frr-sockets\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.820548 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-metrics-certs\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.860190 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fljdj\" (UID: \"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.861972 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhbk\" (UniqueName: \"kubernetes.io/projected/a3c7dfa8-0263-4f57-84c7-c61b75fab65c-kube-api-access-wdhbk\") pod \"frr-k8s-2g97t\" (UID: \"a3c7dfa8-0263-4f57-84c7-c61b75fab65c\") " pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.867308 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrkd\" (UniqueName: \"kubernetes.io/projected/4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae-kube-api-access-nhrkd\") pod \"frr-k8s-webhook-server-78b44bf5bb-fljdj\" (UID: \"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.903459 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2g97t" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.912717 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913508 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99hn\" (UniqueName: \"kubernetes.io/projected/c2ccc2c1-2a28-4507-8713-2aad869af209-kube-api-access-l99hn\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913545 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-metrics-certs\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913583 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-cert\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913614 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0c5c912d-da16-4405-864f-3459d2ef4e9c-metallb-excludel2\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913647 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhvh\" (UniqueName: \"kubernetes.io/projected/0c5c912d-da16-4405-864f-3459d2ef4e9c-kube-api-access-kwhvh\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 
25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913672 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.913691 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-metrics-certs\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: E0225 11:32:00.914538 5005 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 25 11:32:00 crc kubenswrapper[5005]: E0225 11:32:00.914564 5005 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 25 11:32:00 crc kubenswrapper[5005]: E0225 11:32:00.914633 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist podName:0c5c912d-da16-4405-864f-3459d2ef4e9c nodeName:}" failed. No retries permitted until 2026-02-25 11:32:01.414607303 +0000 UTC m=+835.455339630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist") pod "speaker-9h4p9" (UID: "0c5c912d-da16-4405-864f-3459d2ef4e9c") : secret "metallb-memberlist" not found Feb 25 11:32:00 crc kubenswrapper[5005]: E0225 11:32:00.914650 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-metrics-certs podName:c2ccc2c1-2a28-4507-8713-2aad869af209 nodeName:}" failed. 
No retries permitted until 2026-02-25 11:32:01.414644204 +0000 UTC m=+835.455376531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-metrics-certs") pod "controller-69bbfbf88f-j6mkr" (UID: "c2ccc2c1-2a28-4507-8713-2aad869af209") : secret "controller-certs-secret" not found Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.914860 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0c5c912d-da16-4405-864f-3459d2ef4e9c-metallb-excludel2\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.918796 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-cert\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.921624 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-metrics-certs\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.933872 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533652-h5tzk"] Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.940861 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhvh\" (UniqueName: \"kubernetes.io/projected/0c5c912d-da16-4405-864f-3459d2ef4e9c-kube-api-access-kwhvh\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " 
pod="metallb-system/speaker-9h4p9" Feb 25 11:32:00 crc kubenswrapper[5005]: I0225 11:32:00.943919 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99hn\" (UniqueName: \"kubernetes.io/projected/c2ccc2c1-2a28-4507-8713-2aad869af209-kube-api-access-l99hn\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.125893 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj"] Feb 25 11:32:01 crc kubenswrapper[5005]: W0225 11:32:01.131053 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f1e84e7_d8d7_4b92_acc0_28ff4a1c94ae.slice/crio-a17e1935e4c931968625add6d187bf2f2a450c185a9f74691b0aac3a8d052b66 WatchSource:0}: Error finding container a17e1935e4c931968625add6d187bf2f2a450c185a9f74691b0aac3a8d052b66: Status 404 returned error can't find the container with id a17e1935e4c931968625add6d187bf2f2a450c185a9f74691b0aac3a8d052b66 Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.419928 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.420062 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-metrics-certs\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:01 crc kubenswrapper[5005]: E0225 11:32:01.420474 5005 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 25 11:32:01 crc kubenswrapper[5005]: E0225 11:32:01.420560 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist podName:0c5c912d-da16-4405-864f-3459d2ef4e9c nodeName:}" failed. No retries permitted until 2026-02-25 11:32:02.42053368 +0000 UTC m=+836.461266077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist") pod "speaker-9h4p9" (UID: "0c5c912d-da16-4405-864f-3459d2ef4e9c") : secret "metallb-memberlist" not found Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.427713 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2ccc2c1-2a28-4507-8713-2aad869af209-metrics-certs\") pod \"controller-69bbfbf88f-j6mkr\" (UID: \"c2ccc2c1-2a28-4507-8713-2aad869af209\") " pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.553769 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" event={"ID":"919273df-651a-4f1d-a11c-739f7dabad38","Type":"ContainerStarted","Data":"584614b162a3df1b900068209afeae705f10e1db852b5116f27dbba7a1802edc"} Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.555648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" event={"ID":"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae","Type":"ContainerStarted","Data":"a17e1935e4c931968625add6d187bf2f2a450c185a9f74691b0aac3a8d052b66"} Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.557487 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" 
event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"58680fd9e1aa0c912c9d4ef75eeb19b323f6df747fe78e7904af8c16f6e4fa44"} Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.631256 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:01 crc kubenswrapper[5005]: I0225 11:32:01.865975 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-j6mkr"] Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.435880 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.444133 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0c5c912d-da16-4405-864f-3459d2ef4e9c-memberlist\") pod \"speaker-9h4p9\" (UID: \"0c5c912d-da16-4405-864f-3459d2ef4e9c\") " pod="metallb-system/speaker-9h4p9" Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.501034 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9h4p9" Feb 25 11:32:02 crc kubenswrapper[5005]: W0225 11:32:02.520709 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5c912d_da16_4405_864f_3459d2ef4e9c.slice/crio-dd87832e796ef8d7ff3b357a4928295f50ace196776b9db43849a425fd05864a WatchSource:0}: Error finding container dd87832e796ef8d7ff3b357a4928295f50ace196776b9db43849a425fd05864a: Status 404 returned error can't find the container with id dd87832e796ef8d7ff3b357a4928295f50ace196776b9db43849a425fd05864a Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.570011 5005 generic.go:334] "Generic (PLEG): container finished" podID="919273df-651a-4f1d-a11c-739f7dabad38" containerID="cae1af438d132ed2bc47900ecaee536e0d8e691d2548cb3fa92684d743d8c949" exitCode=0 Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.570087 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" event={"ID":"919273df-651a-4f1d-a11c-739f7dabad38","Type":"ContainerDied","Data":"cae1af438d132ed2bc47900ecaee536e0d8e691d2548cb3fa92684d743d8c949"} Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.571825 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9h4p9" event={"ID":"0c5c912d-da16-4405-864f-3459d2ef4e9c","Type":"ContainerStarted","Data":"dd87832e796ef8d7ff3b357a4928295f50ace196776b9db43849a425fd05864a"} Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.574168 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-j6mkr" event={"ID":"c2ccc2c1-2a28-4507-8713-2aad869af209","Type":"ContainerStarted","Data":"0caf74626f2a0c6cae1adb53cc217d3e4dccd17c9f715b1f622e8d3cd5d7ece5"} Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.574195 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-j6mkr" 
event={"ID":"c2ccc2c1-2a28-4507-8713-2aad869af209","Type":"ContainerStarted","Data":"1592443d36903dc786a947c3a487117e6a7106c492068ee3b2cdca127d1a6003"} Feb 25 11:32:02 crc kubenswrapper[5005]: I0225 11:32:02.574207 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-j6mkr" event={"ID":"c2ccc2c1-2a28-4507-8713-2aad869af209","Type":"ContainerStarted","Data":"46e72a6103a8a30b8b379c5953ae65c90f35175d78a93360afb2adc2f605acdd"} Feb 25 11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.583128 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9h4p9" event={"ID":"0c5c912d-da16-4405-864f-3459d2ef4e9c","Type":"ContainerStarted","Data":"7be00c987035523d61cbe1d524e47ac2ae7d23028738b54e6a6ac398959eb8ff"} Feb 25 11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.583620 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9h4p9" event={"ID":"0c5c912d-da16-4405-864f-3459d2ef4e9c","Type":"ContainerStarted","Data":"70f86edfdf9ee48763f36a5d466120e818cd2f33e7f8aa0637b394cf6caed9b2"} Feb 25 11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.583958 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-j6mkr" Feb 25 11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.583980 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9h4p9" Feb 25 11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.596308 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9h4p9" podStartSLOduration=3.596293429 podStartE2EDuration="3.596293429s" podCreationTimestamp="2026-02-25 11:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:32:03.595613609 +0000 UTC m=+837.636345936" watchObservedRunningTime="2026-02-25 11:32:03.596293429 +0000 UTC m=+837.637025756" Feb 25 
11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.596998 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-j6mkr" podStartSLOduration=3.596994041 podStartE2EDuration="3.596994041s" podCreationTimestamp="2026-02-25 11:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:32:02.60467448 +0000 UTC m=+836.645406807" watchObservedRunningTime="2026-02-25 11:32:03.596994041 +0000 UTC m=+837.637726368" Feb 25 11:32:03 crc kubenswrapper[5005]: I0225 11:32:03.902335 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.063242 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckdcl\" (UniqueName: \"kubernetes.io/projected/919273df-651a-4f1d-a11c-739f7dabad38-kube-api-access-ckdcl\") pod \"919273df-651a-4f1d-a11c-739f7dabad38\" (UID: \"919273df-651a-4f1d-a11c-739f7dabad38\") " Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.072705 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919273df-651a-4f1d-a11c-739f7dabad38-kube-api-access-ckdcl" (OuterVolumeSpecName: "kube-api-access-ckdcl") pod "919273df-651a-4f1d-a11c-739f7dabad38" (UID: "919273df-651a-4f1d-a11c-739f7dabad38"). InnerVolumeSpecName "kube-api-access-ckdcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.164993 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckdcl\" (UniqueName: \"kubernetes.io/projected/919273df-651a-4f1d-a11c-739f7dabad38-kube-api-access-ckdcl\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.596565 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.596623 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533652-h5tzk" event={"ID":"919273df-651a-4f1d-a11c-739f7dabad38","Type":"ContainerDied","Data":"584614b162a3df1b900068209afeae705f10e1db852b5116f27dbba7a1802edc"} Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.596654 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="584614b162a3df1b900068209afeae705f10e1db852b5116f27dbba7a1802edc" Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.952856 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533646-4dlls"] Feb 25 11:32:04 crc kubenswrapper[5005]: I0225 11:32:04.952905 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533646-4dlls"] Feb 25 11:32:06 crc kubenswrapper[5005]: I0225 11:32:06.694785 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e30c92cc-84df-4ffb-892f-38caccfc092a" path="/var/lib/kubelet/pods/e30c92cc-84df-4ffb-892f-38caccfc092a/volumes" Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.440734 5005 scope.go:117] "RemoveContainer" containerID="f7ce4101d45dd9b9f387ed5810f32fabdf1e7feca4ebd40e50c2319cc53b7e95" Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.506773 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vzrvq"] Feb 25 11:32:07 crc kubenswrapper[5005]: E0225 11:32:07.506992 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919273df-651a-4f1d-a11c-739f7dabad38" containerName="oc" Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.507003 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="919273df-651a-4f1d-a11c-739f7dabad38" containerName="oc" Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.507099 
5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="919273df-651a-4f1d-a11c-739f7dabad38" containerName="oc"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.508262 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.526066 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzrvq"]
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.618380 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-utilities\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.618441 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9rl8\" (UniqueName: \"kubernetes.io/projected/f1390187-dc05-4fdf-808f-c99094e3756c-kube-api-access-r9rl8\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.618505 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-catalog-content\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.720151 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-catalog-content\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.720569 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-utilities\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.720602 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9rl8\" (UniqueName: \"kubernetes.io/projected/f1390187-dc05-4fdf-808f-c99094e3756c-kube-api-access-r9rl8\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.721795 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-catalog-content\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.722083 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-utilities\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.740352 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9rl8\" (UniqueName: \"kubernetes.io/projected/f1390187-dc05-4fdf-808f-c99094e3756c-kube-api-access-r9rl8\") pod \"redhat-marketplace-vzrvq\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") " pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:07 crc kubenswrapper[5005]: I0225 11:32:07.837806 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.532524 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzrvq"]
Feb 25 11:32:08 crc kubenswrapper[5005]: W0225 11:32:08.537939 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1390187_dc05_4fdf_808f_c99094e3756c.slice/crio-31758176adaba3274ac49c699430ac24dbe79ed23882a04768fb24c8ab68a1d2 WatchSource:0}: Error finding container 31758176adaba3274ac49c699430ac24dbe79ed23882a04768fb24c8ab68a1d2: Status 404 returned error can't find the container with id 31758176adaba3274ac49c699430ac24dbe79ed23882a04768fb24c8ab68a1d2
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.620168 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerStarted","Data":"31758176adaba3274ac49c699430ac24dbe79ed23882a04768fb24c8ab68a1d2"}
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.621409 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" event={"ID":"4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae","Type":"ContainerStarted","Data":"5ae3a4567c3451151d595ed3a65a2c3076a19d9e926ab31c0010244b6ae6fca2"}
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.621575 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj"
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.623275 5005 generic.go:334] "Generic (PLEG): container finished" podID="a3c7dfa8-0263-4f57-84c7-c61b75fab65c" containerID="d42add7600bd7c09c7c7545a7ca65db2857071b54c31e5c770c0faebf84038a4" exitCode=0
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.623317 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerDied","Data":"d42add7600bd7c09c7c7545a7ca65db2857071b54c31e5c770c0faebf84038a4"}
Feb 25 11:32:08 crc kubenswrapper[5005]: I0225 11:32:08.636650 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj" podStartSLOduration=1.578350452 podStartE2EDuration="8.636632679s" podCreationTimestamp="2026-02-25 11:32:00 +0000 UTC" firstStartedPulling="2026-02-25 11:32:01.133277867 +0000 UTC m=+835.174010194" lastFinishedPulling="2026-02-25 11:32:08.191560094 +0000 UTC m=+842.232292421" observedRunningTime="2026-02-25 11:32:08.634201434 +0000 UTC m=+842.674933761" watchObservedRunningTime="2026-02-25 11:32:08.636632679 +0000 UTC m=+842.677365006"
Feb 25 11:32:09 crc kubenswrapper[5005]: I0225 11:32:09.633870 5005 generic.go:334] "Generic (PLEG): container finished" podID="f1390187-dc05-4fdf-808f-c99094e3756c" containerID="19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe" exitCode=0
Feb 25 11:32:09 crc kubenswrapper[5005]: I0225 11:32:09.634053 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerDied","Data":"19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe"}
Feb 25 11:32:09 crc kubenswrapper[5005]: I0225 11:32:09.640737 5005 generic.go:334] "Generic (PLEG): container finished" podID="a3c7dfa8-0263-4f57-84c7-c61b75fab65c" containerID="a6746e80c6161e8e76858573a0c416d19025477c53337c31969581839c7a6d2e" exitCode=0
Feb 25 11:32:09 crc kubenswrapper[5005]: I0225 11:32:09.641333 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerDied","Data":"a6746e80c6161e8e76858573a0c416d19025477c53337c31969581839c7a6d2e"}
Feb 25 11:32:10 crc kubenswrapper[5005]: I0225 11:32:10.649927 5005 generic.go:334] "Generic (PLEG): container finished" podID="a3c7dfa8-0263-4f57-84c7-c61b75fab65c" containerID="e9bca2c59e14f71492289520f9b7c0b7b77325105d3b8920f9d38cd454e539e9" exitCode=0
Feb 25 11:32:10 crc kubenswrapper[5005]: I0225 11:32:10.649992 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerDied","Data":"e9bca2c59e14f71492289520f9b7c0b7b77325105d3b8920f9d38cd454e539e9"}
Feb 25 11:32:10 crc kubenswrapper[5005]: I0225 11:32:10.653460 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerStarted","Data":"3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.663596 5005 generic.go:334] "Generic (PLEG): container finished" podID="f1390187-dc05-4fdf-808f-c99094e3756c" containerID="3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c" exitCode=0
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.663704 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerDied","Data":"3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674603 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"7805bb5a224e3bc6a743986ed13bdad8432c189fe3f1f7895a0c99d89b2a7dec"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674646 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"f60a477ca54875b1ad95f923cd924dfd856ed0bd01abfdceb522a9d5ae31a72a"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674660 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"0291cf3d92534da90274c6e6e203cd4e4e8369f8653f91cec3ca74150b831614"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674673 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"ab1833ae8fb9ef335ba2111d69f64f4210c1a03a06a5418a5f3daa84529a7e43"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674683 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"ae7255c3bc4f0772e89824e6ffa7c822349e91cc31dc0eddb159dda66de42902"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674694 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2g97t" event={"ID":"a3c7dfa8-0263-4f57-84c7-c61b75fab65c","Type":"ContainerStarted","Data":"0bd173327d0a42ebe5f4f0d3660521061cb56b9acd3409c5786f6061204f3f91"}
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.674798 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2g97t"
Feb 25 11:32:11 crc kubenswrapper[5005]: I0225 11:32:11.722751 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2g97t" podStartSLOduration=4.572897975 podStartE2EDuration="11.722730407s" podCreationTimestamp="2026-02-25 11:32:00 +0000 UTC" firstStartedPulling="2026-02-25 11:32:01.050086189 +0000 UTC m=+835.090818516" lastFinishedPulling="2026-02-25 11:32:08.199918611 +0000 UTC m=+842.240650948" observedRunningTime="2026-02-25 11:32:11.718006564 +0000 UTC m=+845.758738901" watchObservedRunningTime="2026-02-25 11:32:11.722730407 +0000 UTC m=+845.763462744"
Feb 25 11:32:12 crc kubenswrapper[5005]: I0225 11:32:12.505876 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9h4p9"
Feb 25 11:32:12 crc kubenswrapper[5005]: I0225 11:32:12.698298 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerStarted","Data":"613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432"}
Feb 25 11:32:12 crc kubenswrapper[5005]: I0225 11:32:12.720202 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vzrvq" podStartSLOduration=3.192359741 podStartE2EDuration="5.720165578s" podCreationTimestamp="2026-02-25 11:32:07 +0000 UTC" firstStartedPulling="2026-02-25 11:32:09.635423179 +0000 UTC m=+843.676155516" lastFinishedPulling="2026-02-25 11:32:12.163229026 +0000 UTC m=+846.203961353" observedRunningTime="2026-02-25 11:32:12.7182436 +0000 UTC m=+846.758975997" watchObservedRunningTime="2026-02-25 11:32:12.720165578 +0000 UTC m=+846.760897905"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.086246 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kvnzd"]
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.087879 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.094915 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hkbnq"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.094999 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kvnzd"]
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.095290 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.096900 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.213212 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmc5\" (UniqueName: \"kubernetes.io/projected/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d-kube-api-access-blmc5\") pod \"openstack-operator-index-kvnzd\" (UID: \"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d\") " pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.315021 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmc5\" (UniqueName: \"kubernetes.io/projected/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d-kube-api-access-blmc5\") pod \"openstack-operator-index-kvnzd\" (UID: \"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d\") " pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.336788 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmc5\" (UniqueName: \"kubernetes.io/projected/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d-kube-api-access-blmc5\") pod \"openstack-operator-index-kvnzd\" (UID: \"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d\") " pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.404975 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.871093 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kvnzd"]
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.904439 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2g97t"
Feb 25 11:32:15 crc kubenswrapper[5005]: I0225 11:32:15.973524 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2g97t"
Feb 25 11:32:16 crc kubenswrapper[5005]: I0225 11:32:16.725226 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kvnzd" event={"ID":"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d","Type":"ContainerStarted","Data":"2d9287ce8c835fc5fd3fc6b63008928eac180d67c58db25e59831a9c09424e2e"}
Feb 25 11:32:17 crc kubenswrapper[5005]: I0225 11:32:17.841335 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:17 crc kubenswrapper[5005]: I0225 11:32:17.843350 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:17 crc kubenswrapper[5005]: I0225 11:32:17.921875 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:18 crc kubenswrapper[5005]: I0225 11:32:18.743330 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kvnzd" event={"ID":"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d","Type":"ContainerStarted","Data":"bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7"}
Feb 25 11:32:18 crc kubenswrapper[5005]: I0225 11:32:18.763893 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kvnzd" podStartSLOduration=1.3094115130000001 podStartE2EDuration="3.763874003s" podCreationTimestamp="2026-02-25 11:32:15 +0000 UTC" firstStartedPulling="2026-02-25 11:32:15.886361072 +0000 UTC m=+849.927093409" lastFinishedPulling="2026-02-25 11:32:18.340823572 +0000 UTC m=+852.381555899" observedRunningTime="2026-02-25 11:32:18.763442389 +0000 UTC m=+852.804174776" watchObservedRunningTime="2026-02-25 11:32:18.763874003 +0000 UTC m=+852.804606330"
Feb 25 11:32:18 crc kubenswrapper[5005]: I0225 11:32:18.811193 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:19 crc kubenswrapper[5005]: I0225 11:32:19.681521 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kvnzd"]
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.494409 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g97mm"]
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.495684 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g97mm"
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.512341 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g97mm"]
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.590342 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2kkt\" (UniqueName: \"kubernetes.io/projected/16ce7172-a1e1-4af8-a26b-6700cad253a3-kube-api-access-s2kkt\") pod \"openstack-operator-index-g97mm\" (UID: \"16ce7172-a1e1-4af8-a26b-6700cad253a3\") " pod="openstack-operators/openstack-operator-index-g97mm"
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.692203 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kkt\" (UniqueName: \"kubernetes.io/projected/16ce7172-a1e1-4af8-a26b-6700cad253a3-kube-api-access-s2kkt\") pod \"openstack-operator-index-g97mm\" (UID: \"16ce7172-a1e1-4af8-a26b-6700cad253a3\") " pod="openstack-operators/openstack-operator-index-g97mm"
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.721460 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kkt\" (UniqueName: \"kubernetes.io/projected/16ce7172-a1e1-4af8-a26b-6700cad253a3-kube-api-access-s2kkt\") pod \"openstack-operator-index-g97mm\" (UID: \"16ce7172-a1e1-4af8-a26b-6700cad253a3\") " pod="openstack-operators/openstack-operator-index-g97mm"
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.757248 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kvnzd" podUID="a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" containerName="registry-server" containerID="cri-o://bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7" gracePeriod=2
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.888462 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g97mm"
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.906166 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2g97t"
Feb 25 11:32:20 crc kubenswrapper[5005]: I0225 11:32:20.925675 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fljdj"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.139123 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.204558 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmc5\" (UniqueName: \"kubernetes.io/projected/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d-kube-api-access-blmc5\") pod \"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d\" (UID: \"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d\") "
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.211254 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d-kube-api-access-blmc5" (OuterVolumeSpecName: "kube-api-access-blmc5") pod "a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" (UID: "a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d"). InnerVolumeSpecName "kube-api-access-blmc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.306612 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmc5\" (UniqueName: \"kubernetes.io/projected/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d-kube-api-access-blmc5\") on node \"crc\" DevicePath \"\""
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.372054 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g97mm"]
Feb 25 11:32:21 crc kubenswrapper[5005]: W0225 11:32:21.377307 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ce7172_a1e1_4af8_a26b_6700cad253a3.slice/crio-a9d881a456a1cb7f4341876a0659729bca5f900c7d55c294b08812583103082c WatchSource:0}: Error finding container a9d881a456a1cb7f4341876a0659729bca5f900c7d55c294b08812583103082c: Status 404 returned error can't find the container with id a9d881a456a1cb7f4341876a0659729bca5f900c7d55c294b08812583103082c
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.635545 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-j6mkr"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.764846 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g97mm" event={"ID":"16ce7172-a1e1-4af8-a26b-6700cad253a3","Type":"ContainerStarted","Data":"3fad75681c27fd93bff29c8857da6e29796d3cc41c0a7bd6a13a83ee770290f8"}
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.764893 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g97mm" event={"ID":"16ce7172-a1e1-4af8-a26b-6700cad253a3","Type":"ContainerStarted","Data":"a9d881a456a1cb7f4341876a0659729bca5f900c7d55c294b08812583103082c"}
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.766872 5005 generic.go:334] "Generic (PLEG): container finished" podID="a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" containerID="bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7" exitCode=0
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.766927 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kvnzd"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.766926 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kvnzd" event={"ID":"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d","Type":"ContainerDied","Data":"bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7"}
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.767000 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kvnzd" event={"ID":"a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d","Type":"ContainerDied","Data":"2d9287ce8c835fc5fd3fc6b63008928eac180d67c58db25e59831a9c09424e2e"}
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.767030 5005 scope.go:117] "RemoveContainer" containerID="bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.789410 5005 scope.go:117] "RemoveContainer" containerID="bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7"
Feb 25 11:32:21 crc kubenswrapper[5005]: E0225 11:32:21.789830 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7\": container with ID starting with bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7 not found: ID does not exist" containerID="bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.789863 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7"} err="failed to get container status \"bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7\": rpc error: code = NotFound desc = could not find container \"bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7\": container with ID starting with bfea8187b49ce2bf4452df2244a82d0efff31cb3178c168705fda1fa159eaea7 not found: ID does not exist"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.791665 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g97mm" podStartSLOduration=1.724243108 podStartE2EDuration="1.791642788s" podCreationTimestamp="2026-02-25 11:32:20 +0000 UTC" firstStartedPulling="2026-02-25 11:32:21.383318913 +0000 UTC m=+855.424051240" lastFinishedPulling="2026-02-25 11:32:21.450718593 +0000 UTC m=+855.491450920" observedRunningTime="2026-02-25 11:32:21.781532543 +0000 UTC m=+855.822264880" watchObservedRunningTime="2026-02-25 11:32:21.791642788 +0000 UTC m=+855.832375125"
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.809388 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kvnzd"]
Feb 25 11:32:21 crc kubenswrapper[5005]: I0225 11:32:21.815717 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kvnzd"]
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.277966 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzrvq"]
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.278651 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vzrvq" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="registry-server" containerID="cri-o://613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432" gracePeriod=2
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.702187 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" path="/var/lib/kubelet/pods/a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d/volumes"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.748409 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.781500 5005 generic.go:334] "Generic (PLEG): container finished" podID="f1390187-dc05-4fdf-808f-c99094e3756c" containerID="613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432" exitCode=0
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.781602 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzrvq"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.781615 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerDied","Data":"613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432"}
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.781686 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzrvq" event={"ID":"f1390187-dc05-4fdf-808f-c99094e3756c","Type":"ContainerDied","Data":"31758176adaba3274ac49c699430ac24dbe79ed23882a04768fb24c8ab68a1d2"}
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.781718 5005 scope.go:117] "RemoveContainer" containerID="613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.804756 5005 scope.go:117] "RemoveContainer" containerID="3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.823809 5005 scope.go:117] "RemoveContainer" containerID="19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.825934 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9rl8\" (UniqueName: \"kubernetes.io/projected/f1390187-dc05-4fdf-808f-c99094e3756c-kube-api-access-r9rl8\") pod \"f1390187-dc05-4fdf-808f-c99094e3756c\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") "
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.826115 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-catalog-content\") pod \"f1390187-dc05-4fdf-808f-c99094e3756c\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") "
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.826231 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-utilities\") pod \"f1390187-dc05-4fdf-808f-c99094e3756c\" (UID: \"f1390187-dc05-4fdf-808f-c99094e3756c\") "
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.828105 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-utilities" (OuterVolumeSpecName: "utilities") pod "f1390187-dc05-4fdf-808f-c99094e3756c" (UID: "f1390187-dc05-4fdf-808f-c99094e3756c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.832676 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1390187-dc05-4fdf-808f-c99094e3756c-kube-api-access-r9rl8" (OuterVolumeSpecName: "kube-api-access-r9rl8") pod "f1390187-dc05-4fdf-808f-c99094e3756c" (UID: "f1390187-dc05-4fdf-808f-c99094e3756c"). InnerVolumeSpecName "kube-api-access-r9rl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.866280 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1390187-dc05-4fdf-808f-c99094e3756c" (UID: "f1390187-dc05-4fdf-808f-c99094e3756c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.868102 5005 scope.go:117] "RemoveContainer" containerID="613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432"
Feb 25 11:32:22 crc kubenswrapper[5005]: E0225 11:32:22.868706 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432\": container with ID starting with 613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432 not found: ID does not exist" containerID="613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.868750 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432"} err="failed to get container status \"613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432\": rpc error: code = NotFound desc = could not find container \"613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432\": container with ID starting with 613c17bb702a3b26016e411a2b9090119f8ee74bfa9edc0ad5e848e173a7f432 not found: ID does not exist"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.868781 5005 scope.go:117] "RemoveContainer" containerID="3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c"
Feb 25 11:32:22 crc kubenswrapper[5005]: E0225 11:32:22.869267 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c\": container with ID starting with 3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c not found: ID does not exist" containerID="3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.869323 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c"} err="failed to get container status \"3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c\": rpc error: code = NotFound desc = could not find container \"3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c\": container with ID starting with 3ee2fb8d71739ccb2f8dc6edded2e2b3999a38a469da34c9d308774cca6e532c not found: ID does not exist"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.869353 5005 scope.go:117] "RemoveContainer" containerID="19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe"
Feb 25 11:32:22 crc kubenswrapper[5005]: E0225 11:32:22.869867 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe\": container with ID starting with 19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe not found: ID does not exist" containerID="19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.869896 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe"} err="failed to get container status \"19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe\": rpc error: code = NotFound desc = could not find container \"19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe\": container with ID starting with 19ba83910ea494bafb154ee65e970681032c0bf4c1a56232791c3a800ade52fe not found: ID does not exist"
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.928166 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-utilities\") on node \"crc\" DevicePath \"\""
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.928203 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9rl8\" (UniqueName: \"kubernetes.io/projected/f1390187-dc05-4fdf-808f-c99094e3756c-kube-api-access-r9rl8\") on node \"crc\" DevicePath \"\""
Feb 25 11:32:22 crc kubenswrapper[5005]: I0225 11:32:22.928217 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1390187-dc05-4fdf-808f-c99094e3756c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 25 11:32:23 crc kubenswrapper[5005]: I0225 11:32:23.121083 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzrvq"]
Feb 25 11:32:23 crc kubenswrapper[5005]: I0225 11:32:23.123963 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzrvq"]
Feb 25 11:32:24 crc kubenswrapper[5005]: I0225 11:32:24.692236 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" path="/var/lib/kubelet/pods/f1390187-dc05-4fdf-808f-c99094e3756c/volumes"
Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492317 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qf5xc"]
Feb 25 11:32:25 crc kubenswrapper[5005]: E0225 11:32:25.492636 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="registry-server" Feb 
25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492652 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="registry-server" Feb 25 11:32:25 crc kubenswrapper[5005]: E0225 11:32:25.492665 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" containerName="registry-server" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492673 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" containerName="registry-server" Feb 25 11:32:25 crc kubenswrapper[5005]: E0225 11:32:25.492691 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="extract-utilities" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492699 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="extract-utilities" Feb 25 11:32:25 crc kubenswrapper[5005]: E0225 11:32:25.492716 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="extract-content" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492725 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="extract-content" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492848 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1390187-dc05-4fdf-808f-c99094e3756c" containerName="registry-server" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.492864 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a5f00e-5b08-43e4-9e1e-b09aca89ff2d" containerName="registry-server" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.493903 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.509604 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qf5xc"] Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.581977 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnc8\" (UniqueName: \"kubernetes.io/projected/d620b83b-7942-481c-9af6-64a537b573a9-kube-api-access-wjnc8\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.582068 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-utilities\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.582274 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-catalog-content\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.683551 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnc8\" (UniqueName: \"kubernetes.io/projected/d620b83b-7942-481c-9af6-64a537b573a9-kube-api-access-wjnc8\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.683604 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-utilities\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.683655 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-catalog-content\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.684173 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-catalog-content\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.684272 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-utilities\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.705633 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnc8\" (UniqueName: \"kubernetes.io/projected/d620b83b-7942-481c-9af6-64a537b573a9-kube-api-access-wjnc8\") pod \"community-operators-qf5xc\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:25 crc kubenswrapper[5005]: I0225 11:32:25.821644 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:26 crc kubenswrapper[5005]: I0225 11:32:26.059439 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qf5xc"] Feb 25 11:32:26 crc kubenswrapper[5005]: I0225 11:32:26.811563 5005 generic.go:334] "Generic (PLEG): container finished" podID="d620b83b-7942-481c-9af6-64a537b573a9" containerID="aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705" exitCode=0 Feb 25 11:32:26 crc kubenswrapper[5005]: I0225 11:32:26.811673 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerDied","Data":"aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705"} Feb 25 11:32:26 crc kubenswrapper[5005]: I0225 11:32:26.812017 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerStarted","Data":"bcb7d481a26fbf52d39b9a07ac758a58c31bda85047545638c385cdfbe99425a"} Feb 25 11:32:27 crc kubenswrapper[5005]: I0225 11:32:27.819702 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerStarted","Data":"0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b"} Feb 25 11:32:28 crc kubenswrapper[5005]: I0225 11:32:28.834462 5005 generic.go:334] "Generic (PLEG): container finished" podID="d620b83b-7942-481c-9af6-64a537b573a9" containerID="0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b" exitCode=0 Feb 25 11:32:28 crc kubenswrapper[5005]: I0225 11:32:28.834508 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" 
event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerDied","Data":"0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b"} Feb 25 11:32:29 crc kubenswrapper[5005]: I0225 11:32:29.841140 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerStarted","Data":"a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6"} Feb 25 11:32:29 crc kubenswrapper[5005]: I0225 11:32:29.861826 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qf5xc" podStartSLOduration=2.459360845 podStartE2EDuration="4.861811169s" podCreationTimestamp="2026-02-25 11:32:25 +0000 UTC" firstStartedPulling="2026-02-25 11:32:26.812714859 +0000 UTC m=+860.853447216" lastFinishedPulling="2026-02-25 11:32:29.215165223 +0000 UTC m=+863.255897540" observedRunningTime="2026-02-25 11:32:29.857456188 +0000 UTC m=+863.898188515" watchObservedRunningTime="2026-02-25 11:32:29.861811169 +0000 UTC m=+863.902543496" Feb 25 11:32:30 crc kubenswrapper[5005]: I0225 11:32:30.889648 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g97mm" Feb 25 11:32:30 crc kubenswrapper[5005]: I0225 11:32:30.890199 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g97mm" Feb 25 11:32:30 crc kubenswrapper[5005]: I0225 11:32:30.914470 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g97mm" Feb 25 11:32:31 crc kubenswrapper[5005]: I0225 11:32:31.888565 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g97mm" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.516056 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8"] Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.517403 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.519247 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2ldjr" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.530883 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8"] Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.591023 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqg6s\" (UniqueName: \"kubernetes.io/projected/84c3a6b4-399c-427a-aa4c-a19b41e897f6-kube-api-access-zqg6s\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.591072 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-util\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.591104 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-bundle\") pod 
\"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.692098 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqg6s\" (UniqueName: \"kubernetes.io/projected/84c3a6b4-399c-427a-aa4c-a19b41e897f6-kube-api-access-zqg6s\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.692143 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-util\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.692177 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-bundle\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.692598 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-bundle\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " 
pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.692616 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-util\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.710962 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqg6s\" (UniqueName: \"kubernetes.io/projected/84c3a6b4-399c-427a-aa4c-a19b41e897f6-kube-api-access-zqg6s\") pod \"d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:33 crc kubenswrapper[5005]: I0225 11:32:33.834417 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.124629 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8"] Feb 25 11:32:34 crc kubenswrapper[5005]: W0225 11:32:34.129968 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84c3a6b4_399c_427a_aa4c_a19b41e897f6.slice/crio-3d0b8c2c0d79999a8e1e296feb4b66fbb9482d591e210819a19bda68a5bc1a10 WatchSource:0}: Error finding container 3d0b8c2c0d79999a8e1e296feb4b66fbb9482d591e210819a19bda68a5bc1a10: Status 404 returned error can't find the container with id 3d0b8c2c0d79999a8e1e296feb4b66fbb9482d591e210819a19bda68a5bc1a10 Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.696296 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-srfpw"] Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.698853 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.716315 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srfpw"] Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.808064 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr5m\" (UniqueName: \"kubernetes.io/projected/f7913552-d866-495c-b760-d4eec2c2c2ff-kube-api-access-nhr5m\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.808144 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-catalog-content\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.808186 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-utilities\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.882929 5005 generic.go:334] "Generic (PLEG): container finished" podID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerID="5cab76dbfec1ee77a04773fd1200cd3ceb0f3a19968cad6bd3949309d82fcd1f" exitCode=0 Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.882992 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" 
event={"ID":"84c3a6b4-399c-427a-aa4c-a19b41e897f6","Type":"ContainerDied","Data":"5cab76dbfec1ee77a04773fd1200cd3ceb0f3a19968cad6bd3949309d82fcd1f"} Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.883029 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" event={"ID":"84c3a6b4-399c-427a-aa4c-a19b41e897f6","Type":"ContainerStarted","Data":"3d0b8c2c0d79999a8e1e296feb4b66fbb9482d591e210819a19bda68a5bc1a10"} Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.910131 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhr5m\" (UniqueName: \"kubernetes.io/projected/f7913552-d866-495c-b760-d4eec2c2c2ff-kube-api-access-nhr5m\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.910189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-catalog-content\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.910211 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-utilities\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.910773 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-utilities\") pod \"certified-operators-srfpw\" (UID: 
\"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.910860 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-catalog-content\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:34 crc kubenswrapper[5005]: I0225 11:32:34.940939 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhr5m\" (UniqueName: \"kubernetes.io/projected/f7913552-d866-495c-b760-d4eec2c2c2ff-kube-api-access-nhr5m\") pod \"certified-operators-srfpw\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.030676 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.405729 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srfpw"] Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.821779 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.822072 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.868288 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.891246 5005 generic.go:334] "Generic (PLEG): container finished" podID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerID="9e675689a371cfde0ce0992f02af203904c469b3ad12b0da09c763202bc2f6fc" exitCode=0 Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.891291 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" event={"ID":"84c3a6b4-399c-427a-aa4c-a19b41e897f6","Type":"ContainerDied","Data":"9e675689a371cfde0ce0992f02af203904c469b3ad12b0da09c763202bc2f6fc"} Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.893619 5005 generic.go:334] "Generic (PLEG): container finished" podID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerID="71e7e9ae85655d2627e1ff02fd93bc9bc29f3ff8067b7890a6205f9ca5a91348" exitCode=0 Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.893741 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srfpw" 
event={"ID":"f7913552-d866-495c-b760-d4eec2c2c2ff","Type":"ContainerDied","Data":"71e7e9ae85655d2627e1ff02fd93bc9bc29f3ff8067b7890a6205f9ca5a91348"} Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.893792 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srfpw" event={"ID":"f7913552-d866-495c-b760-d4eec2c2c2ff","Type":"ContainerStarted","Data":"2a33d9678b38088a5db6d6932ab7e2c4807d87732963ff1626941baf2be6d5b1"} Feb 25 11:32:35 crc kubenswrapper[5005]: I0225 11:32:35.936231 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:36 crc kubenswrapper[5005]: I0225 11:32:36.903551 5005 generic.go:334] "Generic (PLEG): container finished" podID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerID="c7dab0f2c95678646b1b397ca79c81b8ead5b7208595d30a87d39cae8993e9bf" exitCode=0 Feb 25 11:32:36 crc kubenswrapper[5005]: I0225 11:32:36.903607 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" event={"ID":"84c3a6b4-399c-427a-aa4c-a19b41e897f6","Type":"ContainerDied","Data":"c7dab0f2c95678646b1b397ca79c81b8ead5b7208595d30a87d39cae8993e9bf"} Feb 25 11:32:36 crc kubenswrapper[5005]: I0225 11:32:36.906187 5005 generic.go:334] "Generic (PLEG): container finished" podID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerID="bddc27004ef1380b7262a3eba3d8a61f685f8e5cc5ab654a6f41db825980ede7" exitCode=0 Feb 25 11:32:36 crc kubenswrapper[5005]: I0225 11:32:36.906282 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srfpw" event={"ID":"f7913552-d866-495c-b760-d4eec2c2c2ff","Type":"ContainerDied","Data":"bddc27004ef1380b7262a3eba3d8a61f685f8e5cc5ab654a6f41db825980ede7"} Feb 25 11:32:37 crc kubenswrapper[5005]: I0225 11:32:37.915887 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-srfpw" event={"ID":"f7913552-d866-495c-b760-d4eec2c2c2ff","Type":"ContainerStarted","Data":"ae0ccc75338ef1910f8158646778e538c6b3b7c03ec91b3835796b28344d2d19"} Feb 25 11:32:37 crc kubenswrapper[5005]: I0225 11:32:37.934032 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-srfpw" podStartSLOduration=2.47675096 podStartE2EDuration="3.934002403s" podCreationTimestamp="2026-02-25 11:32:34 +0000 UTC" firstStartedPulling="2026-02-25 11:32:35.89677727 +0000 UTC m=+869.937509597" lastFinishedPulling="2026-02-25 11:32:37.354028703 +0000 UTC m=+871.394761040" observedRunningTime="2026-02-25 11:32:37.932295561 +0000 UTC m=+871.973027898" watchObservedRunningTime="2026-02-25 11:32:37.934002403 +0000 UTC m=+871.974734780" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.077843 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qf5xc"] Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.078084 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qf5xc" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="registry-server" containerID="cri-o://a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6" gracePeriod=2 Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.220620 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.361346 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-util\") pod \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.361426 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqg6s\" (UniqueName: \"kubernetes.io/projected/84c3a6b4-399c-427a-aa4c-a19b41e897f6-kube-api-access-zqg6s\") pod \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.361459 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-bundle\") pod \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\" (UID: \"84c3a6b4-399c-427a-aa4c-a19b41e897f6\") " Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.362274 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-bundle" (OuterVolumeSpecName: "bundle") pod "84c3a6b4-399c-427a-aa4c-a19b41e897f6" (UID: "84c3a6b4-399c-427a-aa4c-a19b41e897f6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.376630 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-util" (OuterVolumeSpecName: "util") pod "84c3a6b4-399c-427a-aa4c-a19b41e897f6" (UID: "84c3a6b4-399c-427a-aa4c-a19b41e897f6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.381607 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c3a6b4-399c-427a-aa4c-a19b41e897f6-kube-api-access-zqg6s" (OuterVolumeSpecName: "kube-api-access-zqg6s") pod "84c3a6b4-399c-427a-aa4c-a19b41e897f6" (UID: "84c3a6b4-399c-427a-aa4c-a19b41e897f6"). InnerVolumeSpecName "kube-api-access-zqg6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.452501 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.462865 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqg6s\" (UniqueName: \"kubernetes.io/projected/84c3a6b4-399c-427a-aa4c-a19b41e897f6-kube-api-access-zqg6s\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.462892 5005 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.462901 5005 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84c3a6b4-399c-427a-aa4c-a19b41e897f6-util\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.563866 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjnc8\" (UniqueName: \"kubernetes.io/projected/d620b83b-7942-481c-9af6-64a537b573a9-kube-api-access-wjnc8\") pod \"d620b83b-7942-481c-9af6-64a537b573a9\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.564349 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-catalog-content\") pod \"d620b83b-7942-481c-9af6-64a537b573a9\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.564420 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-utilities\") pod \"d620b83b-7942-481c-9af6-64a537b573a9\" (UID: \"d620b83b-7942-481c-9af6-64a537b573a9\") " Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.565707 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-utilities" (OuterVolumeSpecName: "utilities") pod "d620b83b-7942-481c-9af6-64a537b573a9" (UID: "d620b83b-7942-481c-9af6-64a537b573a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.570576 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d620b83b-7942-481c-9af6-64a537b573a9-kube-api-access-wjnc8" (OuterVolumeSpecName: "kube-api-access-wjnc8") pod "d620b83b-7942-481c-9af6-64a537b573a9" (UID: "d620b83b-7942-481c-9af6-64a537b573a9"). InnerVolumeSpecName "kube-api-access-wjnc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.666124 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.666177 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjnc8\" (UniqueName: \"kubernetes.io/projected/d620b83b-7942-481c-9af6-64a537b573a9-kube-api-access-wjnc8\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.674961 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d620b83b-7942-481c-9af6-64a537b573a9" (UID: "d620b83b-7942-481c-9af6-64a537b573a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.767656 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d620b83b-7942-481c-9af6-64a537b573a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.923329 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" event={"ID":"84c3a6b4-399c-427a-aa4c-a19b41e897f6","Type":"ContainerDied","Data":"3d0b8c2c0d79999a8e1e296feb4b66fbb9482d591e210819a19bda68a5bc1a10"} Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.923391 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d0b8c2c0d79999a8e1e296feb4b66fbb9482d591e210819a19bda68a5bc1a10" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.923630 5005 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.926182 5005 generic.go:334] "Generic (PLEG): container finished" podID="d620b83b-7942-481c-9af6-64a537b573a9" containerID="a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6" exitCode=0 Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.926255 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerDied","Data":"a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6"} Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.926307 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5xc" event={"ID":"d620b83b-7942-481c-9af6-64a537b573a9","Type":"ContainerDied","Data":"bcb7d481a26fbf52d39b9a07ac758a58c31bda85047545638c385cdfbe99425a"} Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.926313 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qf5xc" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.926331 5005 scope.go:117] "RemoveContainer" containerID="a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.948743 5005 scope.go:117] "RemoveContainer" containerID="0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.951804 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qf5xc"] Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.959959 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qf5xc"] Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.965665 5005 scope.go:117] "RemoveContainer" containerID="aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.987978 5005 scope.go:117] "RemoveContainer" containerID="a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6" Feb 25 11:32:38 crc kubenswrapper[5005]: E0225 11:32:38.991712 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6\": container with ID starting with a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6 not found: ID does not exist" containerID="a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.991761 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6"} err="failed to get container status \"a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6\": rpc error: code = NotFound desc = could not find 
container \"a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6\": container with ID starting with a7ebb4507a81b4990109e526a9ca55b29f6a4ca84a22ca4358604c8b6e34aea6 not found: ID does not exist" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.991795 5005 scope.go:117] "RemoveContainer" containerID="0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b" Feb 25 11:32:38 crc kubenswrapper[5005]: E0225 11:32:38.993043 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b\": container with ID starting with 0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b not found: ID does not exist" containerID="0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.993086 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b"} err="failed to get container status \"0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b\": rpc error: code = NotFound desc = could not find container \"0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b\": container with ID starting with 0028481352eee34a8124c1467bccc084348c2bf10570de745f60935eea0e3c1b not found: ID does not exist" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.993116 5005 scope.go:117] "RemoveContainer" containerID="aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705" Feb 25 11:32:38 crc kubenswrapper[5005]: E0225 11:32:38.995814 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705\": container with ID starting with aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705 not found: ID does 
not exist" containerID="aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705" Feb 25 11:32:38 crc kubenswrapper[5005]: I0225 11:32:38.995848 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705"} err="failed to get container status \"aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705\": rpc error: code = NotFound desc = could not find container \"aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705\": container with ID starting with aa78a00fa8ad71040ca4794c535176fcefd54186bb68b8bb5a0fa63d1b67b705 not found: ID does not exist" Feb 25 11:32:40 crc kubenswrapper[5005]: I0225 11:32:40.697297 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d620b83b-7942-481c-9af6-64a537b573a9" path="/var/lib/kubelet/pods/d620b83b-7942-481c-9af6-64a537b573a9/volumes" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458009 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj"] Feb 25 11:32:44 crc kubenswrapper[5005]: E0225 11:32:44.458581 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="extract-utilities" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458593 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="extract-utilities" Feb 25 11:32:44 crc kubenswrapper[5005]: E0225 11:32:44.458605 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="extract" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458611 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="extract" Feb 25 11:32:44 crc kubenswrapper[5005]: E0225 11:32:44.458620 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="registry-server" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458627 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="registry-server" Feb 25 11:32:44 crc kubenswrapper[5005]: E0225 11:32:44.458634 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="pull" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458640 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="pull" Feb 25 11:32:44 crc kubenswrapper[5005]: E0225 11:32:44.458657 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="util" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458663 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="util" Feb 25 11:32:44 crc kubenswrapper[5005]: E0225 11:32:44.458671 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="extract-content" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458677 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="extract-content" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458777 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d620b83b-7942-481c-9af6-64a537b573a9" containerName="registry-server" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.458789 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c3a6b4-399c-427a-aa4c-a19b41e897f6" containerName="extract" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.459141 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.465402 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9zzpt" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.476269 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj"] Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.552332 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkp8\" (UniqueName: \"kubernetes.io/projected/6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af-kube-api-access-6jkp8\") pod \"openstack-operator-controller-init-75d447ff8b-vmmgj\" (UID: \"6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af\") " pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.653497 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkp8\" (UniqueName: \"kubernetes.io/projected/6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af-kube-api-access-6jkp8\") pod \"openstack-operator-controller-init-75d447ff8b-vmmgj\" (UID: \"6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af\") " pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.672613 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkp8\" (UniqueName: \"kubernetes.io/projected/6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af-kube-api-access-6jkp8\") pod \"openstack-operator-controller-init-75d447ff8b-vmmgj\" (UID: \"6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af\") " pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:32:44 crc kubenswrapper[5005]: I0225 11:32:44.778043 5005 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:32:45 crc kubenswrapper[5005]: I0225 11:32:45.031686 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:45 crc kubenswrapper[5005]: I0225 11:32:45.032238 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:45 crc kubenswrapper[5005]: I0225 11:32:45.087076 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:45 crc kubenswrapper[5005]: I0225 11:32:45.208110 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj"] Feb 25 11:32:45 crc kubenswrapper[5005]: I0225 11:32:45.975546 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" event={"ID":"6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af","Type":"ContainerStarted","Data":"2d30d776c565c9143eaf1e776841daaf852eeb959d259d486fe61813d61c4412"} Feb 25 11:32:46 crc kubenswrapper[5005]: I0225 11:32:46.010881 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:46 crc kubenswrapper[5005]: I0225 11:32:46.477985 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srfpw"] Feb 25 11:32:47 crc kubenswrapper[5005]: I0225 11:32:47.989473 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-srfpw" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="registry-server" containerID="cri-o://ae0ccc75338ef1910f8158646778e538c6b3b7c03ec91b3835796b28344d2d19" gracePeriod=2 Feb 25 11:32:48 crc 
kubenswrapper[5005]: I0225 11:32:48.996606 5005 generic.go:334] "Generic (PLEG): container finished" podID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerID="ae0ccc75338ef1910f8158646778e538c6b3b7c03ec91b3835796b28344d2d19" exitCode=0 Feb 25 11:32:48 crc kubenswrapper[5005]: I0225 11:32:48.996671 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srfpw" event={"ID":"f7913552-d866-495c-b760-d4eec2c2c2ff","Type":"ContainerDied","Data":"ae0ccc75338ef1910f8158646778e538c6b3b7c03ec91b3835796b28344d2d19"} Feb 25 11:32:48 crc kubenswrapper[5005]: I0225 11:32:48.996864 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srfpw" event={"ID":"f7913552-d866-495c-b760-d4eec2c2c2ff","Type":"ContainerDied","Data":"2a33d9678b38088a5db6d6932ab7e2c4807d87732963ff1626941baf2be6d5b1"} Feb 25 11:32:48 crc kubenswrapper[5005]: I0225 11:32:48.996875 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a33d9678b38088a5db6d6932ab7e2c4807d87732963ff1626941baf2be6d5b1" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.048277 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.122050 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-utilities\") pod \"f7913552-d866-495c-b760-d4eec2c2c2ff\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.122456 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhr5m\" (UniqueName: \"kubernetes.io/projected/f7913552-d866-495c-b760-d4eec2c2c2ff-kube-api-access-nhr5m\") pod \"f7913552-d866-495c-b760-d4eec2c2c2ff\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.122492 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-catalog-content\") pod \"f7913552-d866-495c-b760-d4eec2c2c2ff\" (UID: \"f7913552-d866-495c-b760-d4eec2c2c2ff\") " Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.123096 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-utilities" (OuterVolumeSpecName: "utilities") pod "f7913552-d866-495c-b760-d4eec2c2c2ff" (UID: "f7913552-d866-495c-b760-d4eec2c2c2ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.128067 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7913552-d866-495c-b760-d4eec2c2c2ff-kube-api-access-nhr5m" (OuterVolumeSpecName: "kube-api-access-nhr5m") pod "f7913552-d866-495c-b760-d4eec2c2c2ff" (UID: "f7913552-d866-495c-b760-d4eec2c2c2ff"). InnerVolumeSpecName "kube-api-access-nhr5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.170190 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7913552-d866-495c-b760-d4eec2c2c2ff" (UID: "f7913552-d866-495c-b760-d4eec2c2c2ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.224092 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.224358 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhr5m\" (UniqueName: \"kubernetes.io/projected/f7913552-d866-495c-b760-d4eec2c2c2ff-kube-api-access-nhr5m\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:49 crc kubenswrapper[5005]: I0225 11:32:49.224438 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7913552-d866-495c-b760-d4eec2c2c2ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.005412 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srfpw" Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.005417 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" event={"ID":"6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af","Type":"ContainerStarted","Data":"b9a000ca98a8ae92bc5a7c6abe5d2ac742d768094e16f843e776b174808788f4"} Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.006431 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.040254 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" podStartSLOduration=2.404829867 podStartE2EDuration="6.040226439s" podCreationTimestamp="2026-02-25 11:32:44 +0000 UTC" firstStartedPulling="2026-02-25 11:32:45.227813072 +0000 UTC m=+879.268545399" lastFinishedPulling="2026-02-25 11:32:48.863209654 +0000 UTC m=+882.903941971" observedRunningTime="2026-02-25 11:32:50.035460444 +0000 UTC m=+884.076192791" watchObservedRunningTime="2026-02-25 11:32:50.040226439 +0000 UTC m=+884.080958796" Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.055167 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srfpw"] Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.059294 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-srfpw"] Feb 25 11:32:50 crc kubenswrapper[5005]: I0225 11:32:50.700225 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" path="/var/lib/kubelet/pods/f7913552-d866-495c-b760-d4eec2c2c2ff/volumes" Feb 25 11:32:54 crc kubenswrapper[5005]: I0225 11:32:54.781286 5005 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-75d447ff8b-vmmgj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.156127 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr"] Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.157862 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="registry-server" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.157935 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="registry-server" Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.158036 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="extract-utilities" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.158111 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="extract-utilities" Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.158174 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="extract-content" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.158230 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="extract-content" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.158414 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7913552-d866-495c-b760-d4eec2c2c2ff" containerName="registry-server" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.158881 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.161423 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.162541 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-b9jkb" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.162649 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.165235 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-prqg8" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.210239 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.217309 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.218838 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.222228 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-swc7z" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.226979 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.227936 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.238452 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.241982 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.245477 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.245825 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ddw2r" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.246603 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.250530 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-hxjsp" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.266098 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.275253 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.276155 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.278496 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mptrd" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.284911 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.299218 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl4d4\" (UniqueName: \"kubernetes.io/projected/6a567e4b-427c-4355-a59b-22f247ce374f-kube-api-access-cl4d4\") pod \"cinder-operator-controller-manager-55d77d7b5c-swlsl\" (UID: \"6a567e4b-427c-4355-a59b-22f247ce374f\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.299274 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktj7q\" (UniqueName: 
\"kubernetes.io/projected/e43cd401-1094-4b7e-89cd-08216d652cee-kube-api-access-ktj7q\") pod \"barbican-operator-controller-manager-868647ff47-6hqnr\" (UID: \"e43cd401-1094-4b7e-89cd-08216d652cee\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.299396 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn6g\" (UniqueName: \"kubernetes.io/projected/d4d77380-132e-40ee-859f-ed77a83e2f0a-kube-api-access-pfn6g\") pod \"glance-operator-controller-manager-784b5bb6c5-b4p92\" (UID: \"d4d77380-132e-40ee-859f-ed77a83e2f0a\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.312116 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.316570 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.317342 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.320699 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pv4q2" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.320836 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.343426 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.344181 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.354120 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ckjhh" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.357353 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.361806 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.364691 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.365619 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.369404 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-dntq6" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.375450 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.385423 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.386198 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401028 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwbf9\" (UniqueName: \"kubernetes.io/projected/bbe120fb-31bc-4979-afb0-e629a69b4c80-kube-api-access-zwbf9\") pod \"designate-operator-controller-manager-6d8bf5c495-2flh2\" (UID: \"bbe120fb-31bc-4979-afb0-e629a69b4c80\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401076 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88ls\" (UniqueName: \"kubernetes.io/projected/153c478a-59a9-4d31-8822-cfb3b62d9c39-kube-api-access-x88ls\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401099 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401115 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svj6l\" (UniqueName: \"kubernetes.io/projected/baa1eb2e-998d-46b3-8641-f2274bb32274-kube-api-access-svj6l\") pod \"horizon-operator-controller-manager-5b9b8895d5-c24l5\" (UID: \"baa1eb2e-998d-46b3-8641-f2274bb32274\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401138 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl4d4\" (UniqueName: \"kubernetes.io/projected/6a567e4b-427c-4355-a59b-22f247ce374f-kube-api-access-cl4d4\") pod \"cinder-operator-controller-manager-55d77d7b5c-swlsl\" (UID: \"6a567e4b-427c-4355-a59b-22f247ce374f\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401159 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjld\" (UniqueName: \"kubernetes.io/projected/e5913871-3107-4c84-b940-34c8f4171fc2-kube-api-access-5qjld\") pod \"ironic-operator-controller-manager-554564d7fc-79ddk\" (UID: \"e5913871-3107-4c84-b940-34c8f4171fc2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401175 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn777\" (UniqueName: 
\"kubernetes.io/projected/a87589fa-1024-43bc-85ec-e9c3bf944db3-kube-api-access-wn777\") pod \"keystone-operator-controller-manager-b4d948c87-4kqj7\" (UID: \"a87589fa-1024-43bc-85ec-e9c3bf944db3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401192 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktj7q\" (UniqueName: \"kubernetes.io/projected/e43cd401-1094-4b7e-89cd-08216d652cee-kube-api-access-ktj7q\") pod \"barbican-operator-controller-manager-868647ff47-6hqnr\" (UID: \"e43cd401-1094-4b7e-89cd-08216d652cee\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401213 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78x4\" (UniqueName: \"kubernetes.io/projected/c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33-kube-api-access-g78x4\") pod \"manila-operator-controller-manager-67d996989d-zlm8p\" (UID: \"c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401718 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gljfx" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.401959 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbr7\" (UniqueName: \"kubernetes.io/projected/10203f00-712d-4f78-87eb-973cd8b82e16-kube-api-access-btbr7\") pod \"heat-operator-controller-manager-69f49c598c-kmwrd\" (UID: \"10203f00-712d-4f78-87eb-973cd8b82e16\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.402047 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pfn6g\" (UniqueName: \"kubernetes.io/projected/d4d77380-132e-40ee-859f-ed77a83e2f0a-kube-api-access-pfn6g\") pod \"glance-operator-controller-manager-784b5bb6c5-b4p92\" (UID: \"d4d77380-132e-40ee-859f-ed77a83e2f0a\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.411412 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.412119 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.418743 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9w824" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.425838 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.426503 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktj7q\" (UniqueName: \"kubernetes.io/projected/e43cd401-1094-4b7e-89cd-08216d652cee-kube-api-access-ktj7q\") pod \"barbican-operator-controller-manager-868647ff47-6hqnr\" (UID: \"e43cd401-1094-4b7e-89cd-08216d652cee\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.426587 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfn6g\" (UniqueName: \"kubernetes.io/projected/d4d77380-132e-40ee-859f-ed77a83e2f0a-kube-api-access-pfn6g\") pod \"glance-operator-controller-manager-784b5bb6c5-b4p92\" (UID: 
\"d4d77380-132e-40ee-859f-ed77a83e2f0a\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.434137 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl4d4\" (UniqueName: \"kubernetes.io/projected/6a567e4b-427c-4355-a59b-22f247ce374f-kube-api-access-cl4d4\") pod \"cinder-operator-controller-manager-55d77d7b5c-swlsl\" (UID: \"6a567e4b-427c-4355-a59b-22f247ce374f\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.441708 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.442708 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.445587 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4xktw" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.452731 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.460208 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.464419 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.467482 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ktvdz" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.467976 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.491081 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.494966 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.495730 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.497720 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hmb2f" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502230 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502582 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4sfd\" (UniqueName: \"kubernetes.io/projected/bb12a9fa-312f-4ecd-9732-513717eeb77e-kube-api-access-q4sfd\") pod \"mariadb-operator-controller-manager-6994f66f48-8mtqq\" (UID: \"bb12a9fa-312f-4ecd-9732-513717eeb77e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502648 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwbf9\" (UniqueName: \"kubernetes.io/projected/bbe120fb-31bc-4979-afb0-e629a69b4c80-kube-api-access-zwbf9\") pod \"designate-operator-controller-manager-6d8bf5c495-2flh2\" (UID: \"bbe120fb-31bc-4979-afb0-e629a69b4c80\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502684 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88ls\" (UniqueName: \"kubernetes.io/projected/153c478a-59a9-4d31-8822-cfb3b62d9c39-kube-api-access-x88ls\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502720 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502746 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svj6l\" (UniqueName: \"kubernetes.io/projected/baa1eb2e-998d-46b3-8641-f2274bb32274-kube-api-access-svj6l\") pod \"horizon-operator-controller-manager-5b9b8895d5-c24l5\" (UID: \"baa1eb2e-998d-46b3-8641-f2274bb32274\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502776 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjld\" (UniqueName: \"kubernetes.io/projected/e5913871-3107-4c84-b940-34c8f4171fc2-kube-api-access-5qjld\") pod \"ironic-operator-controller-manager-554564d7fc-79ddk\" (UID: \"e5913871-3107-4c84-b940-34c8f4171fc2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.502805 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn777\" (UniqueName: \"kubernetes.io/projected/a87589fa-1024-43bc-85ec-e9c3bf944db3-kube-api-access-wn777\") pod \"keystone-operator-controller-manager-b4d948c87-4kqj7\" (UID: \"a87589fa-1024-43bc-85ec-e9c3bf944db3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.503289 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78x4\" (UniqueName: 
\"kubernetes.io/projected/c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33-kube-api-access-g78x4\") pod \"manila-operator-controller-manager-67d996989d-zlm8p\" (UID: \"c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.503342 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4s7\" (UniqueName: \"kubernetes.io/projected/12ea1888-c73d-4a23-a3c2-ba52588b8eba-kube-api-access-rr4s7\") pod \"octavia-operator-controller-manager-659dc6bbfc-xfzrq\" (UID: \"12ea1888-c73d-4a23-a3c2-ba52588b8eba\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.503423 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8rv\" (UniqueName: \"kubernetes.io/projected/ef3a922b-0335-4403-b695-5924b4ce2650-kube-api-access-hx8rv\") pod \"nova-operator-controller-manager-567668f5cf-xr4bs\" (UID: \"ef3a922b-0335-4403-b695-5924b4ce2650\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.503530 5005 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.503514 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbr7\" (UniqueName: \"kubernetes.io/projected/10203f00-712d-4f78-87eb-973cd8b82e16-kube-api-access-btbr7\") pod \"heat-operator-controller-manager-69f49c598c-kmwrd\" (UID: \"10203f00-712d-4f78-87eb-973cd8b82e16\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.503612 5005 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert podName:153c478a-59a9-4d31-8822-cfb3b62d9c39 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:15.003592166 +0000 UTC m=+909.044324493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert") pod "infra-operator-controller-manager-79d975b745-8jrrj" (UID: "153c478a-59a9-4d31-8822-cfb3b62d9c39") : secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.503850 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5jh\" (UniqueName: \"kubernetes.io/projected/681d6a40-d67d-4d22-985d-7f3b6f10a1d7-kube-api-access-bg5jh\") pod \"neutron-operator-controller-manager-6bd4687957-5gzxv\" (UID: \"681d6a40-d67d-4d22-985d-7f3b6f10a1d7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.506959 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.509413 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.510423 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.513289 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.513301 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.513636 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qr6pz" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.544553 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.546104 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.547577 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.551857 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qgjzv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.556793 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn777\" (UniqueName: \"kubernetes.io/projected/a87589fa-1024-43bc-85ec-e9c3bf944db3-kube-api-access-wn777\") pod \"keystone-operator-controller-manager-b4d948c87-4kqj7\" (UID: \"a87589fa-1024-43bc-85ec-e9c3bf944db3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.557938 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x88ls\" (UniqueName: \"kubernetes.io/projected/153c478a-59a9-4d31-8822-cfb3b62d9c39-kube-api-access-x88ls\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.558071 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwbf9\" (UniqueName: \"kubernetes.io/projected/bbe120fb-31bc-4979-afb0-e629a69b4c80-kube-api-access-zwbf9\") pod \"designate-operator-controller-manager-6d8bf5c495-2flh2\" (UID: \"bbe120fb-31bc-4979-afb0-e629a69b4c80\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.558624 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svj6l\" (UniqueName: \"kubernetes.io/projected/baa1eb2e-998d-46b3-8641-f2274bb32274-kube-api-access-svj6l\") pod \"horizon-operator-controller-manager-5b9b8895d5-c24l5\" (UID: 
\"baa1eb2e-998d-46b3-8641-f2274bb32274\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.560778 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjld\" (UniqueName: \"kubernetes.io/projected/e5913871-3107-4c84-b940-34c8f4171fc2-kube-api-access-5qjld\") pod \"ironic-operator-controller-manager-554564d7fc-79ddk\" (UID: \"e5913871-3107-4c84-b940-34c8f4171fc2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.561206 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btbr7\" (UniqueName: \"kubernetes.io/projected/10203f00-712d-4f78-87eb-973cd8b82e16-kube-api-access-btbr7\") pod \"heat-operator-controller-manager-69f49c598c-kmwrd\" (UID: \"10203f00-712d-4f78-87eb-973cd8b82e16\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.561078 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78x4\" (UniqueName: \"kubernetes.io/projected/c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33-kube-api-access-g78x4\") pod \"manila-operator-controller-manager-67d996989d-zlm8p\" (UID: \"c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.578635 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.580605 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.585472 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.593180 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.594190 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.598273 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b27xm"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.598551 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fqw5k" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.599063 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.602536 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dmrgv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.604632 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlj4\" (UniqueName: \"kubernetes.io/projected/100cda48-3590-4281-8f00-6881497a2420-kube-api-access-nrlj4\") pod \"ovn-operator-controller-manager-5955d8c787-xfwvf\" (UID: \"100cda48-3590-4281-8f00-6881497a2420\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.604768 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5jh\" (UniqueName: \"kubernetes.io/projected/681d6a40-d67d-4d22-985d-7f3b6f10a1d7-kube-api-access-bg5jh\") pod \"neutron-operator-controller-manager-6bd4687957-5gzxv\" (UID: \"681d6a40-d67d-4d22-985d-7f3b6f10a1d7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.604796 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4sfd\" (UniqueName: \"kubernetes.io/projected/bb12a9fa-312f-4ecd-9732-513717eeb77e-kube-api-access-q4sfd\") pod \"mariadb-operator-controller-manager-6994f66f48-8mtqq\" (UID: \"bb12a9fa-312f-4ecd-9732-513717eeb77e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.604879 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.604956 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjjm\" (UniqueName: \"kubernetes.io/projected/ec29cb03-6d5b-4922-bf56-da937019444d-kube-api-access-tbjjm\") pod \"placement-operator-controller-manager-8497b45c89-5rslz\" (UID: \"ec29cb03-6d5b-4922-bf56-da937019444d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.604982 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9hf7\" (UniqueName: \"kubernetes.io/projected/ce436a93-47a7-48b4-8134-04bd264c6105-kube-api-access-l9hf7\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.605031 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnjd\" (UniqueName: \"kubernetes.io/projected/22145e61-39f2-479f-a6a9-82afb5c654df-kube-api-access-hpnjd\") pod \"swift-operator-controller-manager-68f46476f-b27xm\" (UID: \"22145e61-39f2-479f-a6a9-82afb5c654df\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.610418 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4s7\" (UniqueName: \"kubernetes.io/projected/12ea1888-c73d-4a23-a3c2-ba52588b8eba-kube-api-access-rr4s7\") pod \"octavia-operator-controller-manager-659dc6bbfc-xfzrq\" 
(UID: \"12ea1888-c73d-4a23-a3c2-ba52588b8eba\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.610559 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8rv\" (UniqueName: \"kubernetes.io/projected/ef3a922b-0335-4403-b695-5924b4ce2650-kube-api-access-hx8rv\") pod \"nova-operator-controller-manager-567668f5cf-xr4bs\" (UID: \"ef3a922b-0335-4403-b695-5924b4ce2650\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.617210 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.621457 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.623009 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4s7\" (UniqueName: \"kubernetes.io/projected/12ea1888-c73d-4a23-a3c2-ba52588b8eba-kube-api-access-rr4s7\") pod \"octavia-operator-controller-manager-659dc6bbfc-xfzrq\" (UID: \"12ea1888-c73d-4a23-a3c2-ba52588b8eba\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.626045 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5jh\" (UniqueName: \"kubernetes.io/projected/681d6a40-d67d-4d22-985d-7f3b6f10a1d7-kube-api-access-bg5jh\") pod \"neutron-operator-controller-manager-6bd4687957-5gzxv\" (UID: \"681d6a40-d67d-4d22-985d-7f3b6f10a1d7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.630190 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4sfd\" (UniqueName: \"kubernetes.io/projected/bb12a9fa-312f-4ecd-9732-513717eeb77e-kube-api-access-q4sfd\") pod \"mariadb-operator-controller-manager-6994f66f48-8mtqq\" (UID: \"bb12a9fa-312f-4ecd-9732-513717eeb77e\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.634084 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8rv\" (UniqueName: \"kubernetes.io/projected/ef3a922b-0335-4403-b695-5924b4ce2650-kube-api-access-hx8rv\") pod \"nova-operator-controller-manager-567668f5cf-xr4bs\" (UID: \"ef3a922b-0335-4403-b695-5924b4ce2650\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.642271 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b27xm"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.667324 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.668298 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.670299 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-d9rj6" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.672913 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.680714 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.713829 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.714064 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.714140 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjjm\" (UniqueName: \"kubernetes.io/projected/ec29cb03-6d5b-4922-bf56-da937019444d-kube-api-access-tbjjm\") pod \"placement-operator-controller-manager-8497b45c89-5rslz\" (UID: \"ec29cb03-6d5b-4922-bf56-da937019444d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.714165 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9hf7\" (UniqueName: \"kubernetes.io/projected/ce436a93-47a7-48b4-8134-04bd264c6105-kube-api-access-l9hf7\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.714189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnjd\" (UniqueName: \"kubernetes.io/projected/22145e61-39f2-479f-a6a9-82afb5c654df-kube-api-access-hpnjd\") pod 
\"swift-operator-controller-manager-68f46476f-b27xm\" (UID: \"22145e61-39f2-479f-a6a9-82afb5c654df\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.714229 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pfdw\" (UniqueName: \"kubernetes.io/projected/667f5dd6-b885-416e-b0da-aa3af5f91d4e-kube-api-access-9pfdw\") pod \"telemetry-operator-controller-manager-589c568786-x5c6m\" (UID: \"667f5dd6-b885-416e-b0da-aa3af5f91d4e\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.714252 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlj4\" (UniqueName: \"kubernetes.io/projected/100cda48-3590-4281-8f00-6881497a2420-kube-api-access-nrlj4\") pod \"ovn-operator-controller-manager-5955d8c787-xfwvf\" (UID: \"100cda48-3590-4281-8f00-6881497a2420\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.714509 5005 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:14 crc kubenswrapper[5005]: E0225 11:33:14.714553 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert podName:ce436a93-47a7-48b4-8134-04bd264c6105 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:15.214537228 +0000 UTC m=+909.255269555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" (UID: "ce436a93-47a7-48b4-8134-04bd264c6105") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.755646 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnjd\" (UniqueName: \"kubernetes.io/projected/22145e61-39f2-479f-a6a9-82afb5c654df-kube-api-access-hpnjd\") pod \"swift-operator-controller-manager-68f46476f-b27xm\" (UID: \"22145e61-39f2-479f-a6a9-82afb5c654df\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.758091 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjjm\" (UniqueName: \"kubernetes.io/projected/ec29cb03-6d5b-4922-bf56-da937019444d-kube-api-access-tbjjm\") pod \"placement-operator-controller-manager-8497b45c89-5rslz\" (UID: \"ec29cb03-6d5b-4922-bf56-da937019444d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.768648 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.779229 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.786979 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.793058 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.810261 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.819992 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pfdw\" (UniqueName: \"kubernetes.io/projected/667f5dd6-b885-416e-b0da-aa3af5f91d4e-kube-api-access-9pfdw\") pod \"telemetry-operator-controller-manager-589c568786-x5c6m\" (UID: \"667f5dd6-b885-416e-b0da-aa3af5f91d4e\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.821497 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlj4\" (UniqueName: \"kubernetes.io/projected/100cda48-3590-4281-8f00-6881497a2420-kube-api-access-nrlj4\") pod \"ovn-operator-controller-manager-5955d8c787-xfwvf\" (UID: \"100cda48-3590-4281-8f00-6881497a2420\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.826623 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9hf7\" (UniqueName: \"kubernetes.io/projected/ce436a93-47a7-48b4-8134-04bd264c6105-kube-api-access-l9hf7\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.840813 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.844406 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5d94c77696-97dh4"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.845436 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.857030 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xsdbg" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.860511 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5d94c77696-97dh4"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.886476 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.887275 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7"] Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.887348 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.910242 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x5s2l" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.921699 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghxc\" (UniqueName: \"kubernetes.io/projected/5a255675-1625-4741-a27d-dbd287a31276-kube-api-access-lghxc\") pod \"test-operator-controller-manager-5d94c77696-97dh4\" (UID: \"5a255675-1625-4741-a27d-dbd287a31276\") " pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.921765 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdnb\" (UniqueName: \"kubernetes.io/projected/8855d562-69b7-4122-804f-dd87a6a3031a-kube-api-access-5hdnb\") pod \"watcher-operator-controller-manager-bccc79885-zzcv7\" (UID: \"8855d562-69b7-4122-804f-dd87a6a3031a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.932423 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.940097 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pfdw\" (UniqueName: \"kubernetes.io/projected/667f5dd6-b885-416e-b0da-aa3af5f91d4e-kube-api-access-9pfdw\") pod \"telemetry-operator-controller-manager-589c568786-x5c6m\" (UID: \"667f5dd6-b885-416e-b0da-aa3af5f91d4e\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:14 crc kubenswrapper[5005]: I0225 11:33:14.999547 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.014352 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.015133 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.023002 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghxc\" (UniqueName: \"kubernetes.io/projected/5a255675-1625-4741-a27d-dbd287a31276-kube-api-access-lghxc\") pod \"test-operator-controller-manager-5d94c77696-97dh4\" (UID: \"5a255675-1625-4741-a27d-dbd287a31276\") " pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.023069 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdnb\" (UniqueName: \"kubernetes.io/projected/8855d562-69b7-4122-804f-dd87a6a3031a-kube-api-access-5hdnb\") pod \"watcher-operator-controller-manager-bccc79885-zzcv7\" (UID: \"8855d562-69b7-4122-804f-dd87a6a3031a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.023096 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.023251 5005 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.023298 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert podName:153c478a-59a9-4d31-8822-cfb3b62d9c39 nodeName:}" failed. 
No retries permitted until 2026-02-25 11:33:16.02328525 +0000 UTC m=+910.064017577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert") pod "infra-operator-controller-manager-79d975b745-8jrrj" (UID: "153c478a-59a9-4d31-8822-cfb3b62d9c39") : secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.023626 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.025123 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tdq6g" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.025934 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.039119 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.050874 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.070125 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdnb\" (UniqueName: \"kubernetes.io/projected/8855d562-69b7-4122-804f-dd87a6a3031a-kube-api-access-5hdnb\") pod \"watcher-operator-controller-manager-bccc79885-zzcv7\" (UID: \"8855d562-69b7-4122-804f-dd87a6a3031a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.072747 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghxc\" (UniqueName: \"kubernetes.io/projected/5a255675-1625-4741-a27d-dbd287a31276-kube-api-access-lghxc\") pod \"test-operator-controller-manager-5d94c77696-97dh4\" (UID: \"5a255675-1625-4741-a27d-dbd287a31276\") " pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.079405 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.080432 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.081928 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.083157 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-2jz7f" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.123842 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.124167 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9f5\" (UniqueName: \"kubernetes.io/projected/626ae184-1624-447b-a9be-7e4d92dc4e67-kube-api-access-ng9f5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fc4gc\" (UID: \"626ae184-1624-447b-a9be-7e4d92dc4e67\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.124238 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwvr\" (UniqueName: \"kubernetes.io/projected/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-kube-api-access-8gwvr\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.127175 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 
11:33:15.127265 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.172234 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.207088 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.229315 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.229386 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.229415 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng9f5\" (UniqueName: \"kubernetes.io/projected/626ae184-1624-447b-a9be-7e4d92dc4e67-kube-api-access-ng9f5\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-fc4gc\" (UID: \"626ae184-1624-447b-a9be-7e4d92dc4e67\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.229496 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwvr\" (UniqueName: \"kubernetes.io/projected/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-kube-api-access-8gwvr\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.229520 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.229572 5005 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.229652 5005 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.229681 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:15.729642384 +0000 UTC m=+909.770374781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.229582 5005 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.229821 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert podName:ce436a93-47a7-48b4-8134-04bd264c6105 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:16.229801879 +0000 UTC m=+910.270534276 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" (UID: "ce436a93-47a7-48b4-8134-04bd264c6105") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.229841 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:15.72982892 +0000 UTC m=+909.770561297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "metrics-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.248672 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng9f5\" (UniqueName: \"kubernetes.io/projected/626ae184-1624-447b-a9be-7e4d92dc4e67-kube-api-access-ng9f5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fc4gc\" (UID: \"626ae184-1624-447b-a9be-7e4d92dc4e67\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.252004 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwvr\" (UniqueName: \"kubernetes.io/projected/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-kube-api-access-8gwvr\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.261690 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.263704 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.425622 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.425930 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.665232 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.678606 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb12a9fa_312f_4ecd_9732_513717eeb77e.slice/crio-615ce95163178b41ce1af98f2f07507404dba26fdaee71b2f76ea5ce9dd11b54 WatchSource:0}: Error finding container 615ce95163178b41ce1af98f2f07507404dba26fdaee71b2f76ea5ce9dd11b54: Status 404 returned error can't find the container with id 615ce95163178b41ce1af98f2f07507404dba26fdaee71b2f76ea5ce9dd11b54 Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.683364 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.726900 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.736846 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.742654 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.742813 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.742980 5005 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.743055 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:16.743040429 +0000 UTC m=+910.783772756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "metrics-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.746076 5005 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.746131 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:16.746113552 +0000 UTC m=+910.786845879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "webhook-server-cert" not found Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.757548 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.758247 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87589fa_1024_43bc_85ec_e9c3bf944db3.slice/crio-943e5caf59994aaf1f38e10b080281766d9520c93b71244cb6e943f41d496fc8 WatchSource:0}: Error finding container 943e5caf59994aaf1f38e10b080281766d9520c93b71244cb6e943f41d496fc8: Status 404 returned error can't find the container with id 943e5caf59994aaf1f38e10b080281766d9520c93b71244cb6e943f41d496fc8 Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.762912 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.766925 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10203f00_712d_4f78_87eb_973cd8b82e16.slice/crio-309d8edb35d04984128ed8474e5705003e551d6f6fa1bb4fe8e5de8562520135 WatchSource:0}: Error finding container 309d8edb35d04984128ed8474e5705003e551d6f6fa1bb4fe8e5de8562520135: Status 404 returned error can't find the container with id 309d8edb35d04984128ed8474e5705003e551d6f6fa1bb4fe8e5de8562520135 Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.808945 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2"] Feb 25 11:33:15 crc 
kubenswrapper[5005]: I0225 11:33:15.815974 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.822656 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ea1888_c73d_4a23_a3c2_ba52588b8eba.slice/crio-40f9cff4c2a20ff6f3d9e1f9b7040bf0be98b4f0f6cc3eccb8bf38548b62a007 WatchSource:0}: Error finding container 40f9cff4c2a20ff6f3d9e1f9b7040bf0be98b4f0f6cc3eccb8bf38548b62a007: Status 404 returned error can't find the container with id 40f9cff4c2a20ff6f3d9e1f9b7040bf0be98b4f0f6cc3eccb8bf38548b62a007 Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.828989 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.832844 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc70b1ffc_e4b5_4e6e_9b0e_e3141c223d33.slice/crio-e003bfd9bda6050148f2ca0eac4ca13946f492d7b01717c3cc1af093f4f7553e WatchSource:0}: Error finding container e003bfd9bda6050148f2ca0eac4ca13946f492d7b01717c3cc1af093f4f7553e: Status 404 returned error can't find the container with id e003bfd9bda6050148f2ca0eac4ca13946f492d7b01717c3cc1af093f4f7553e Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.833605 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p"] Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.834502 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g78x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-zlm8p_openstack-operators(c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.836048 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" podUID="c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33" Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.959872 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.968692 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.974102 5005 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8855d562_69b7_4122_804f_dd87a6a3031a.slice/crio-03374abc3d8c02a522221e0554a3c2a8a3e62238e1b5185465008fd20d1d4938 WatchSource:0}: Error finding container 03374abc3d8c02a522221e0554a3c2a8a3e62238e1b5185465008fd20d1d4938: Status 404 returned error can't find the container with id 03374abc3d8c02a522221e0554a3c2a8a3e62238e1b5185465008fd20d1d4938 Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.974569 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz"] Feb 25 11:33:15 crc kubenswrapper[5005]: I0225 11:33:15.979803 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-b27xm"] Feb 25 11:33:15 crc kubenswrapper[5005]: W0225 11:33:15.983776 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3a922b_0335_4403_b695_5924b4ce2650.slice/crio-953d06db265373364c4d66296f3fbe729d03781edebaf0f4628fd722208ab37f WatchSource:0}: Error finding container 953d06db265373364c4d66296f3fbe729d03781edebaf0f4628fd722208ab37f: Status 404 returned error can't find the container with id 953d06db265373364c4d66296f3fbe729d03781edebaf0f4628fd722208ab37f Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.988736 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbjjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-5rslz_openstack-operators(ec29cb03-6d5b-4922-bf56-da937019444d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.989837 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" podUID="ec29cb03-6d5b-4922-bf56-da937019444d" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.990217 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hx8rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-xr4bs_openstack-operators(ef3a922b-0335-4403-b695-5924b4ce2650): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.993049 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" podUID="ef3a922b-0335-4403-b695-5924b4ce2650" Feb 25 11:33:15 crc kubenswrapper[5005]: E0225 11:33:15.999206 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpnjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-b27xm_openstack-operators(22145e61-39f2-479f-a6a9-82afb5c654df): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.000454 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5d94c77696-97dh4"] Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.000476 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" podUID="22145e61-39f2-479f-a6a9-82afb5c654df" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.004508 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc"] Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.008818 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.89:5001/openstack-k8s-operators/test-operator:d20663b7d233ae81245fb9370fd707e869eadfde,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lghxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5d94c77696-97dh4_openstack-operators(5a255675-1625-4741-a27d-dbd287a31276): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.009254 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m"] Feb 25 11:33:16 crc kubenswrapper[5005]: W0225 11:33:16.009752 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod667f5dd6_b885_416e_b0da_aa3af5f91d4e.slice/crio-0742cb7fbb038be98c0f5b59a6322364934a83985dde900a0005b629e8e824ac WatchSource:0}: Error finding container 0742cb7fbb038be98c0f5b59a6322364934a83985dde900a0005b629e8e824ac: Status 404 returned error can't find the container with id 0742cb7fbb038be98c0f5b59a6322364934a83985dde900a0005b629e8e824ac Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.009916 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" 
podUID="5a255675-1625-4741-a27d-dbd287a31276" Feb 25 11:33:16 crc kubenswrapper[5005]: W0225 11:33:16.010266 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod626ae184_1624_447b_a9be_7e4d92dc4e67.slice/crio-f378fbcf5cc3f82a336f5ef39052239782281a37fa8bec2f6191caf99eadb6cf WatchSource:0}: Error finding container f378fbcf5cc3f82a336f5ef39052239782281a37fa8bec2f6191caf99eadb6cf: Status 404 returned error can't find the container with id f378fbcf5cc3f82a336f5ef39052239782281a37fa8bec2f6191caf99eadb6cf Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.012815 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ng9f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fc4gc_openstack-operators(626ae184-1624-447b-a9be-7e4d92dc4e67): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.013822 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9pfdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-x5c6m_openstack-operators(667f5dd6-b885-416e-b0da-aa3af5f91d4e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.013879 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" podUID="626ae184-1624-447b-a9be-7e4d92dc4e67" Feb 25 11:33:16 crc 
kubenswrapper[5005]: E0225 11:33:16.015888 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" podUID="667f5dd6-b885-416e-b0da-aa3af5f91d4e" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.046022 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.046216 5005 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.046297 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert podName:153c478a-59a9-4d31-8822-cfb3b62d9c39 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:18.046278744 +0000 UTC m=+912.087011071 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert") pod "infra-operator-controller-manager-79d975b745-8jrrj" (UID: "153c478a-59a9-4d31-8822-cfb3b62d9c39") : secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.217691 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" event={"ID":"626ae184-1624-447b-a9be-7e4d92dc4e67","Type":"ContainerStarted","Data":"f378fbcf5cc3f82a336f5ef39052239782281a37fa8bec2f6191caf99eadb6cf"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.219469 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" event={"ID":"5a255675-1625-4741-a27d-dbd287a31276","Type":"ContainerStarted","Data":"d0690eb52991fa4762544cbe40f6f506d4dfb7253d88d07725d22b1d4f397e9a"} Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.219617 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" podUID="626ae184-1624-447b-a9be-7e4d92dc4e67" Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.221683 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.89:5001/openstack-k8s-operators/test-operator:d20663b7d233ae81245fb9370fd707e869eadfde\\\"\"" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" podUID="5a255675-1625-4741-a27d-dbd287a31276" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.222502 5005 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" event={"ID":"bb12a9fa-312f-4ecd-9732-513717eeb77e","Type":"ContainerStarted","Data":"615ce95163178b41ce1af98f2f07507404dba26fdaee71b2f76ea5ce9dd11b54"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.223686 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" event={"ID":"e5913871-3107-4c84-b940-34c8f4171fc2","Type":"ContainerStarted","Data":"f17303513ac1045455b9a8a3c46788671dca557a42692b764649688422660c66"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.229416 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" event={"ID":"681d6a40-d67d-4d22-985d-7f3b6f10a1d7","Type":"ContainerStarted","Data":"ac84861d42bb60d12162672edb8a3286b466f215b71dc3473e2343205524f6a8"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.231114 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" event={"ID":"10203f00-712d-4f78-87eb-973cd8b82e16","Type":"ContainerStarted","Data":"309d8edb35d04984128ed8474e5705003e551d6f6fa1bb4fe8e5de8562520135"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.245037 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" event={"ID":"a87589fa-1024-43bc-85ec-e9c3bf944db3","Type":"ContainerStarted","Data":"943e5caf59994aaf1f38e10b080281766d9520c93b71244cb6e943f41d496fc8"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.246507 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" 
event={"ID":"100cda48-3590-4281-8f00-6881497a2420","Type":"ContainerStarted","Data":"5bd3893907b249cbec84d993abd054224d991c564ecd6dc1bdc5bfa443c21e1c"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.247535 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" event={"ID":"e43cd401-1094-4b7e-89cd-08216d652cee","Type":"ContainerStarted","Data":"ceabbd0942458addc261d5713808a618e621e78c44ada54b18d535a41f18312b"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.249338 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.249530 5005 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.249610 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert podName:ce436a93-47a7-48b4-8134-04bd264c6105 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:18.249591007 +0000 UTC m=+912.290323334 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" (UID: "ce436a93-47a7-48b4-8134-04bd264c6105") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.250002 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" event={"ID":"22145e61-39f2-479f-a6a9-82afb5c654df","Type":"ContainerStarted","Data":"9ba5388c6352bc913f131ec16d8ac7fcd3268f37c587280db30a3ead76033830"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.252491 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" event={"ID":"8855d562-69b7-4122-804f-dd87a6a3031a","Type":"ContainerStarted","Data":"03374abc3d8c02a522221e0554a3c2a8a3e62238e1b5185465008fd20d1d4938"} Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.257470 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" podUID="22145e61-39f2-479f-a6a9-82afb5c654df" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.258159 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" event={"ID":"6a567e4b-427c-4355-a59b-22f247ce374f","Type":"ContainerStarted","Data":"4a352675c2701ca2d8f0aca3dab00124db7f6c351fb06ec985459d38fd21e0a3"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.260983 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" event={"ID":"12ea1888-c73d-4a23-a3c2-ba52588b8eba","Type":"ContainerStarted","Data":"40f9cff4c2a20ff6f3d9e1f9b7040bf0be98b4f0f6cc3eccb8bf38548b62a007"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.265338 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" event={"ID":"c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33","Type":"ContainerStarted","Data":"e003bfd9bda6050148f2ca0eac4ca13946f492d7b01717c3cc1af093f4f7553e"} Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.267028 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" podUID="c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.267979 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" event={"ID":"baa1eb2e-998d-46b3-8641-f2274bb32274","Type":"ContainerStarted","Data":"ba5ba88757bf638fbbc0f90418f601c78c0b6ad31cf4178e42c36b6057e99882"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.269819 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" event={"ID":"ef3a922b-0335-4403-b695-5924b4ce2650","Type":"ContainerStarted","Data":"953d06db265373364c4d66296f3fbe729d03781edebaf0f4628fd722208ab37f"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.271072 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" 
event={"ID":"667f5dd6-b885-416e-b0da-aa3af5f91d4e","Type":"ContainerStarted","Data":"0742cb7fbb038be98c0f5b59a6322364934a83985dde900a0005b629e8e824ac"} Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.274296 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" podUID="667f5dd6-b885-416e-b0da-aa3af5f91d4e" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.275703 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" event={"ID":"bbe120fb-31bc-4979-afb0-e629a69b4c80","Type":"ContainerStarted","Data":"0a6f345f52dcb4870c6aae9a44a244b3da68a969b1c9a049dcfc7ddfca2c3c0d"} Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.277462 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" podUID="ef3a922b-0335-4403-b695-5924b4ce2650" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.278169 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" event={"ID":"d4d77380-132e-40ee-859f-ed77a83e2f0a","Type":"ContainerStarted","Data":"8d475c9a2b9eff6ac3df9543d5fd8f81c354c20111a81076e5a6eafd432d06b1"} Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.279540 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" 
event={"ID":"ec29cb03-6d5b-4922-bf56-da937019444d","Type":"ContainerStarted","Data":"aa1cd41980f76c511e7fb4a7504df61ca1ae7cce694aa791903f2dbad73607d6"} Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.280932 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" podUID="ec29cb03-6d5b-4922-bf56-da937019444d" Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.767948 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.771748 5005 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.771823 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:18.771806678 +0000 UTC m=+912.812539005 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "webhook-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.772710 5005 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: E0225 11:33:16.772889 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:18.772878821 +0000 UTC m=+912.813611148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "metrics-server-cert" not found Feb 25 11:33:16 crc kubenswrapper[5005]: I0225 11:33:16.773551 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.300433 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" podUID="ec29cb03-6d5b-4922-bf56-da937019444d" Feb 25 11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.301037 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.89:5001/openstack-k8s-operators/test-operator:d20663b7d233ae81245fb9370fd707e869eadfde\\\"\"" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" podUID="5a255675-1625-4741-a27d-dbd287a31276" Feb 25 11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.301217 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" podUID="667f5dd6-b885-416e-b0da-aa3af5f91d4e" Feb 25 11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.301323 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" podUID="ef3a922b-0335-4403-b695-5924b4ce2650" Feb 25 11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.301274 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" podUID="c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33" Feb 25 
11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.301448 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" podUID="626ae184-1624-447b-a9be-7e4d92dc4e67" Feb 25 11:33:17 crc kubenswrapper[5005]: E0225 11:33:17.301739 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" podUID="22145e61-39f2-479f-a6a9-82afb5c654df" Feb 25 11:33:18 crc kubenswrapper[5005]: I0225 11:33:18.099852 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.100051 5005 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.100156 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert podName:153c478a-59a9-4d31-8822-cfb3b62d9c39 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:22.100116291 +0000 UTC m=+916.140848618 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert") pod "infra-operator-controller-manager-79d975b745-8jrrj" (UID: "153c478a-59a9-4d31-8822-cfb3b62d9c39") : secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: I0225 11:33:18.302461 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.302729 5005 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.302777 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert podName:ce436a93-47a7-48b4-8134-04bd264c6105 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:22.302761962 +0000 UTC m=+916.343494289 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" (UID: "ce436a93-47a7-48b4-8134-04bd264c6105") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: I0225 11:33:18.810157 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:18 crc kubenswrapper[5005]: I0225 11:33:18.810271 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.810313 5005 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.810394 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:22.810352811 +0000 UTC m=+916.851085138 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "webhook-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.810473 5005 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 25 11:33:18 crc kubenswrapper[5005]: E0225 11:33:18.810582 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:22.810551258 +0000 UTC m=+916.851283625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "metrics-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: I0225 11:33:22.170482 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.170655 5005 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.172804 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert 
podName:153c478a-59a9-4d31-8822-cfb3b62d9c39 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:30.172784134 +0000 UTC m=+924.213516461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert") pod "infra-operator-controller-manager-79d975b745-8jrrj" (UID: "153c478a-59a9-4d31-8822-cfb3b62d9c39") : secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: I0225 11:33:22.375775 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.375986 5005 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.376055 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert podName:ce436a93-47a7-48b4-8134-04bd264c6105 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:30.376036734 +0000 UTC m=+924.416769071 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" (UID: "ce436a93-47a7-48b4-8134-04bd264c6105") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: I0225 11:33:22.883158 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.883422 5005 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.883661 5005 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.883676 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:30.883632933 +0000 UTC m=+924.924365450 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "metrics-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: E0225 11:33:22.883733 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:30.883712036 +0000 UTC m=+924.924444373 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "webhook-server-cert" not found Feb 25 11:33:22 crc kubenswrapper[5005]: I0225 11:33:22.883532 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.412341 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" event={"ID":"12ea1888-c73d-4a23-a3c2-ba52588b8eba","Type":"ContainerStarted","Data":"f256be098f9a60d2b946071941ced6ee0815c6ec7f4f6eb5d1b0e8d7f59982fa"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.412876 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:28 crc 
kubenswrapper[5005]: I0225 11:33:28.415751 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" event={"ID":"a87589fa-1024-43bc-85ec-e9c3bf944db3","Type":"ContainerStarted","Data":"74ccf68a60c319427dd974171ba19559f8b702f3d0c3dcd17a45e80d7cf589f9"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.415889 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.417359 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" event={"ID":"100cda48-3590-4281-8f00-6881497a2420","Type":"ContainerStarted","Data":"c5decbd2f8a0edf1a5bcc51c250c91d03731dc01fb5ad1eb3ee06d111a3fadfb"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.417513 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.419171 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" event={"ID":"e43cd401-1094-4b7e-89cd-08216d652cee","Type":"ContainerStarted","Data":"9bb435417c36f98ef2d8aa8b506f661b2d8091e861100ff35415440333b45be2"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.419561 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.420661 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" 
event={"ID":"8855d562-69b7-4122-804f-dd87a6a3031a","Type":"ContainerStarted","Data":"174963ea6d5d53ea42a656139411a6cc6eb1f82d3fa8c35111ec8ca141eac80a"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.420992 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.426962 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" event={"ID":"6a567e4b-427c-4355-a59b-22f247ce374f","Type":"ContainerStarted","Data":"adf9992c69e9d4bd91d483b58d65849a68d6a8beb1a80942c5a308ad22d2708d"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.427582 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.431841 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" event={"ID":"bbe120fb-31bc-4979-afb0-e629a69b4c80","Type":"ContainerStarted","Data":"29182dc484a1463bdea7fe54339209ce4624ab64ef728174aaca716add9397e7"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.432337 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.433529 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" event={"ID":"d4d77380-132e-40ee-859f-ed77a83e2f0a","Type":"ContainerStarted","Data":"76096003d6c8c5c335553df94dc85102df5dfa7d204531d4f6c680063989e2f7"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.433874 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.435036 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" event={"ID":"baa1eb2e-998d-46b3-8641-f2274bb32274","Type":"ContainerStarted","Data":"82b91ad30f3400bf963c6b35a054b5d3f6d0664de7c72ae44274e1eb0a39fd08"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.435418 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.440440 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" event={"ID":"681d6a40-d67d-4d22-985d-7f3b6f10a1d7","Type":"ContainerStarted","Data":"bde064a117b0e6d6254a4042b2deb71c037bc2380cb987a7c7753c13b7fb673c"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.440561 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.443161 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" event={"ID":"bb12a9fa-312f-4ecd-9732-513717eeb77e","Type":"ContainerStarted","Data":"13fe382e82ae751d2e3a4dd6af6742f267dbbb7f5dda212e7a7b002537ff83d2"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.443306 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.445174 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" 
event={"ID":"e5913871-3107-4c84-b940-34c8f4171fc2","Type":"ContainerStarted","Data":"d32b39a755ae6e89c4f513c28d01af182650b8ecafe18a3838da36aa48fccf2e"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.445617 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" podStartSLOduration=2.950544056 podStartE2EDuration="14.44560736s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.829865687 +0000 UTC m=+909.870598014" lastFinishedPulling="2026-02-25 11:33:27.324928991 +0000 UTC m=+921.365661318" observedRunningTime="2026-02-25 11:33:28.439767314 +0000 UTC m=+922.480499641" watchObservedRunningTime="2026-02-25 11:33:28.44560736 +0000 UTC m=+922.486339687" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.445774 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.446885 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" event={"ID":"10203f00-712d-4f78-87eb-973cd8b82e16","Type":"ContainerStarted","Data":"b77a3a993e448c2780b484b00b838f80cf87f65fb7316c9d67991b8f9b8032dd"} Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.447606 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.497829 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" podStartSLOduration=2.450223347 podStartE2EDuration="14.49781297s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.284881375 +0000 UTC m=+909.325613702" 
lastFinishedPulling="2026-02-25 11:33:27.332470968 +0000 UTC m=+921.373203325" observedRunningTime="2026-02-25 11:33:28.494667975 +0000 UTC m=+922.535400302" watchObservedRunningTime="2026-02-25 11:33:28.49781297 +0000 UTC m=+922.538545297" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.550907 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" podStartSLOduration=2.571241749 podStartE2EDuration="14.550892656s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.307922283 +0000 UTC m=+909.348654610" lastFinishedPulling="2026-02-25 11:33:27.2875732 +0000 UTC m=+921.328305517" observedRunningTime="2026-02-25 11:33:28.544470202 +0000 UTC m=+922.585202529" watchObservedRunningTime="2026-02-25 11:33:28.550892656 +0000 UTC m=+922.591624973" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.597822 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" podStartSLOduration=2.754147384 podStartE2EDuration="14.597809426s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.487845997 +0000 UTC m=+909.528578324" lastFinishedPulling="2026-02-25 11:33:27.331508039 +0000 UTC m=+921.372240366" observedRunningTime="2026-02-25 11:33:28.597132855 +0000 UTC m=+922.637865182" watchObservedRunningTime="2026-02-25 11:33:28.597809426 +0000 UTC m=+922.638541753" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.621548 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" podStartSLOduration=3.149445516 podStartE2EDuration="14.621530674s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.823279137 +0000 UTC m=+909.864011464" 
lastFinishedPulling="2026-02-25 11:33:27.295364295 +0000 UTC m=+921.336096622" observedRunningTime="2026-02-25 11:33:28.620262395 +0000 UTC m=+922.660994722" watchObservedRunningTime="2026-02-25 11:33:28.621530674 +0000 UTC m=+922.662263001" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.663945 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" podStartSLOduration=3.112124075 podStartE2EDuration="14.663928476s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.734783309 +0000 UTC m=+909.775515626" lastFinishedPulling="2026-02-25 11:33:27.2865877 +0000 UTC m=+921.327320027" observedRunningTime="2026-02-25 11:33:28.66040277 +0000 UTC m=+922.701135097" watchObservedRunningTime="2026-02-25 11:33:28.663928476 +0000 UTC m=+922.704660803" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.698961 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" podStartSLOduration=3.225161685 podStartE2EDuration="14.698941406s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.819868063 +0000 UTC m=+909.860600390" lastFinishedPulling="2026-02-25 11:33:27.293647754 +0000 UTC m=+921.334380111" observedRunningTime="2026-02-25 11:33:28.697531033 +0000 UTC m=+922.738263440" watchObservedRunningTime="2026-02-25 11:33:28.698941406 +0000 UTC m=+922.739673733" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.742911 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" podStartSLOduration=3.116341834 podStartE2EDuration="14.742892276s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.76091966 +0000 UTC m=+909.801651987" lastFinishedPulling="2026-02-25 
11:33:27.387470102 +0000 UTC m=+921.428202429" observedRunningTime="2026-02-25 11:33:28.73710578 +0000 UTC m=+922.777838107" watchObservedRunningTime="2026-02-25 11:33:28.742892276 +0000 UTC m=+922.783624603" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.824148 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" podStartSLOduration=3.468119488 podStartE2EDuration="14.824130204s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.978963078 +0000 UTC m=+910.019695405" lastFinishedPulling="2026-02-25 11:33:27.334973754 +0000 UTC m=+921.375706121" observedRunningTime="2026-02-25 11:33:28.794867259 +0000 UTC m=+922.835599586" watchObservedRunningTime="2026-02-25 11:33:28.824130204 +0000 UTC m=+922.864862531" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.825752 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" podStartSLOduration=3.234143448 podStartE2EDuration="14.825746893s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.694986065 +0000 UTC m=+909.735718392" lastFinishedPulling="2026-02-25 11:33:27.28658951 +0000 UTC m=+921.327321837" observedRunningTime="2026-02-25 11:33:28.821118373 +0000 UTC m=+922.861850700" watchObservedRunningTime="2026-02-25 11:33:28.825746893 +0000 UTC m=+922.866479220" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.864659 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" podStartSLOduration=3.3260068179999998 podStartE2EDuration="14.86464188s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.748064751 +0000 UTC m=+909.788797078" lastFinishedPulling="2026-02-25 11:33:27.286699773 +0000 
UTC m=+921.327432140" observedRunningTime="2026-02-25 11:33:28.858551016 +0000 UTC m=+922.899283343" watchObservedRunningTime="2026-02-25 11:33:28.86464188 +0000 UTC m=+922.905374207" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.886620 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" podStartSLOduration=3.260717522 podStartE2EDuration="14.886603524s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.680182457 +0000 UTC m=+909.720914784" lastFinishedPulling="2026-02-25 11:33:27.306068459 +0000 UTC m=+921.346800786" observedRunningTime="2026-02-25 11:33:28.883988635 +0000 UTC m=+922.924720972" watchObservedRunningTime="2026-02-25 11:33:28.886603524 +0000 UTC m=+922.927335841" Feb 25 11:33:28 crc kubenswrapper[5005]: I0225 11:33:28.932538 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" podStartSLOduration=3.407777872 podStartE2EDuration="14.932522034s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.769054516 +0000 UTC m=+909.809786843" lastFinishedPulling="2026-02-25 11:33:27.293798678 +0000 UTC m=+921.334531005" observedRunningTime="2026-02-25 11:33:28.929647647 +0000 UTC m=+922.970379974" watchObservedRunningTime="2026-02-25 11:33:28.932522034 +0000 UTC m=+922.973254361" Feb 25 11:33:30 crc kubenswrapper[5005]: I0225 11:33:30.216155 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.216363 5005 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.216588 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert podName:153c478a-59a9-4d31-8822-cfb3b62d9c39 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:46.216570637 +0000 UTC m=+940.257302964 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert") pod "infra-operator-controller-manager-79d975b745-8jrrj" (UID: "153c478a-59a9-4d31-8822-cfb3b62d9c39") : secret "infra-operator-webhook-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: I0225 11:33:30.419274 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.419490 5005 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.419569 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert podName:ce436a93-47a7-48b4-8134-04bd264c6105 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:46.419549829 +0000 UTC m=+940.460282146 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" (UID: "ce436a93-47a7-48b4-8134-04bd264c6105") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: I0225 11:33:30.927233 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:30 crc kubenswrapper[5005]: I0225 11:33:30.927595 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.927781 5005 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.927856 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:46.927839249 +0000 UTC m=+940.968571576 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "metrics-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.927792 5005 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 25 11:33:30 crc kubenswrapper[5005]: E0225 11:33:30.928224 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs podName:c058ccb9-0d11-4b5b-bbc8-bb88b605a661 nodeName:}" failed. No retries permitted until 2026-02-25 11:33:46.92818547 +0000 UTC m=+940.968917837 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs") pod "openstack-operator-controller-manager-655db7587-5x4xq" (UID: "c058ccb9-0d11-4b5b-bbc8-bb88b605a661") : secret "webhook-server-cert" not found Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.479688 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" event={"ID":"ef3a922b-0335-4403-b695-5924b4ce2650","Type":"ContainerStarted","Data":"2da1750ceb50b1739fb02bc89bd19ce4dc016c7f7996577915d2934a11f2b17a"} Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.480666 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.482581 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" 
event={"ID":"667f5dd6-b885-416e-b0da-aa3af5f91d4e","Type":"ContainerStarted","Data":"b7a84d52a534b592e5fa8001d42e63025c1a7812535c7bb371b0ec9a9440f7e8"} Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.483006 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.487631 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" event={"ID":"5a255675-1625-4741-a27d-dbd287a31276","Type":"ContainerStarted","Data":"ac02a7ee3b74b083d1ef9d8b963597e7c330721ec1f224e367459e3a818e6c7a"} Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.487802 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.497546 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" podStartSLOduration=3.206037989 podStartE2EDuration="18.497527526s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.990111846 +0000 UTC m=+910.030844173" lastFinishedPulling="2026-02-25 11:33:31.281601373 +0000 UTC m=+925.322333710" observedRunningTime="2026-02-25 11:33:32.493895555 +0000 UTC m=+926.534627882" watchObservedRunningTime="2026-02-25 11:33:32.497527526 +0000 UTC m=+926.538259853" Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.510932 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" podStartSLOduration=3.226181507 podStartE2EDuration="18.51091466s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:16.013757391 +0000 UTC m=+910.054489718" 
lastFinishedPulling="2026-02-25 11:33:31.298490524 +0000 UTC m=+925.339222871" observedRunningTime="2026-02-25 11:33:32.510094196 +0000 UTC m=+926.550826523" watchObservedRunningTime="2026-02-25 11:33:32.51091466 +0000 UTC m=+926.551646987" Feb 25 11:33:32 crc kubenswrapper[5005]: I0225 11:33:32.523328 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" podStartSLOduration=3.251304767 podStartE2EDuration="18.523312726s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:16.008711488 +0000 UTC m=+910.049443815" lastFinishedPulling="2026-02-25 11:33:31.280719407 +0000 UTC m=+925.321451774" observedRunningTime="2026-02-25 11:33:32.522764189 +0000 UTC m=+926.563496516" watchObservedRunningTime="2026-02-25 11:33:32.523312726 +0000 UTC m=+926.564045053" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.509580 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-6hqnr" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.515724 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-swlsl" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.552224 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-b4p92" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.581421 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c24l5" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.622349 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kmwrd" Feb 25 11:33:34 crc 
kubenswrapper[5005]: I0225 11:33:34.694014 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-79ddk" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.717978 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-4kqj7" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.782944 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8mtqq" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.790146 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-5gzxv" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.814457 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-xfzrq" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.843888 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2flh2" Feb 25 11:33:34 crc kubenswrapper[5005]: I0225 11:33:34.935765 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-xfwvf" Feb 25 11:33:35 crc kubenswrapper[5005]: I0225 11:33:35.264706 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-zzcv7" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.525848 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" 
event={"ID":"c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33","Type":"ContainerStarted","Data":"abeb77c1a22c0450a4be1e72ffaa04d9f996638e822c2b10e6f1e153e2f12d77"} Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.526801 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.528545 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" event={"ID":"22145e61-39f2-479f-a6a9-82afb5c654df","Type":"ContainerStarted","Data":"62475a9280e46a30fc7e6c43fb188385ce72993578abf4ae463a8c8ce3cc38db"} Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.528864 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.530255 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" event={"ID":"ec29cb03-6d5b-4922-bf56-da937019444d","Type":"ContainerStarted","Data":"daeefb91b09a471c0fe21d2a47ae116f18763fc329f86f44e10c52c66dc94b7c"} Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.530599 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.536139 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" event={"ID":"626ae184-1624-447b-a9be-7e4d92dc4e67","Type":"ContainerStarted","Data":"04689fcf73ef7a21c631d637d65ed59e9f4909803cf2f2c427ea35b0a4eaab31"} Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.542975 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" podStartSLOduration=2.126591045 podStartE2EDuration="24.542963182s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.834355862 +0000 UTC m=+909.875088189" lastFinishedPulling="2026-02-25 11:33:38.250727969 +0000 UTC m=+932.291460326" observedRunningTime="2026-02-25 11:33:38.539867718 +0000 UTC m=+932.580600045" watchObservedRunningTime="2026-02-25 11:33:38.542963182 +0000 UTC m=+932.583695509" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.556650 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" podStartSLOduration=2.304733585 podStartE2EDuration="24.556629266s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.999071026 +0000 UTC m=+910.039803353" lastFinishedPulling="2026-02-25 11:33:38.250966707 +0000 UTC m=+932.291699034" observedRunningTime="2026-02-25 11:33:38.55378862 +0000 UTC m=+932.594520947" watchObservedRunningTime="2026-02-25 11:33:38.556629266 +0000 UTC m=+932.597361603" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.578554 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fc4gc" podStartSLOduration=1.3026217199999999 podStartE2EDuration="23.578495177s" podCreationTimestamp="2026-02-25 11:33:15 +0000 UTC" firstStartedPulling="2026-02-25 11:33:16.012722439 +0000 UTC m=+910.053454766" lastFinishedPulling="2026-02-25 11:33:38.288595876 +0000 UTC m=+932.329328223" observedRunningTime="2026-02-25 11:33:38.575756464 +0000 UTC m=+932.616488801" watchObservedRunningTime="2026-02-25 11:33:38.578495177 +0000 UTC m=+932.619227514" Feb 25 11:33:38 crc kubenswrapper[5005]: I0225 11:33:38.604881 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" podStartSLOduration=2.378448846 podStartE2EDuration="24.604858025s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:15.98861867 +0000 UTC m=+910.029350997" lastFinishedPulling="2026-02-25 11:33:38.215027849 +0000 UTC m=+932.255760176" observedRunningTime="2026-02-25 11:33:38.598992738 +0000 UTC m=+932.639725075" watchObservedRunningTime="2026-02-25 11:33:38.604858025 +0000 UTC m=+932.645590362" Feb 25 11:33:44 crc kubenswrapper[5005]: I0225 11:33:44.772300 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-zlm8p" Feb 25 11:33:44 crc kubenswrapper[5005]: I0225 11:33:44.801057 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xr4bs" Feb 25 11:33:45 crc kubenswrapper[5005]: I0225 11:33:45.003193 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5rslz" Feb 25 11:33:45 crc kubenswrapper[5005]: I0225 11:33:45.054758 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-b27xm" Feb 25 11:33:45 crc kubenswrapper[5005]: I0225 11:33:45.087522 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-x5c6m" Feb 25 11:33:45 crc kubenswrapper[5005]: I0225 11:33:45.267328 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5d94c77696-97dh4" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.307592 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.322934 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/153c478a-59a9-4d31-8822-cfb3b62d9c39-cert\") pod \"infra-operator-controller-manager-79d975b745-8jrrj\" (UID: \"153c478a-59a9-4d31-8822-cfb3b62d9c39\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.455330 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.510664 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.518113 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce436a93-47a7-48b4-8134-04bd264c6105-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4\" (UID: \"ce436a93-47a7-48b4-8134-04bd264c6105\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.717158 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:46 crc kubenswrapper[5005]: I0225 11:33:46.939938 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj"] Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.020098 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.020228 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.026351 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-webhook-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.026834 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c058ccb9-0d11-4b5b-bbc8-bb88b605a661-metrics-certs\") pod \"openstack-operator-controller-manager-655db7587-5x4xq\" (UID: \"c058ccb9-0d11-4b5b-bbc8-bb88b605a661\") " 
pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.156691 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.158732 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4"] Feb 25 11:33:47 crc kubenswrapper[5005]: W0225 11:33:47.181781 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce436a93_47a7_48b4_8134_04bd264c6105.slice/crio-49463d9a2a0f9cf03e830748dae18b84591c9be02d082a1c60999de1cf4158db WatchSource:0}: Error finding container 49463d9a2a0f9cf03e830748dae18b84591c9be02d082a1c60999de1cf4158db: Status 404 returned error can't find the container with id 49463d9a2a0f9cf03e830748dae18b84591c9be02d082a1c60999de1cf4158db Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.620523 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq"] Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.629647 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" event={"ID":"153c478a-59a9-4d31-8822-cfb3b62d9c39","Type":"ContainerStarted","Data":"b333e7235c64d8e7ea6458caa54698a15680af6b433489ea754ce513326c8812"} Feb 25 11:33:47 crc kubenswrapper[5005]: I0225 11:33:47.632131 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" event={"ID":"ce436a93-47a7-48b4-8134-04bd264c6105","Type":"ContainerStarted","Data":"49463d9a2a0f9cf03e830748dae18b84591c9be02d082a1c60999de1cf4158db"} Feb 25 11:33:47 crc kubenswrapper[5005]: W0225 
11:33:47.634665 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc058ccb9_0d11_4b5b_bbc8_bb88b605a661.slice/crio-e2989b780cab475a6ac1a655df57af9a9ff7e0cccf17da2869b0f0df2aedc57c WatchSource:0}: Error finding container e2989b780cab475a6ac1a655df57af9a9ff7e0cccf17da2869b0f0df2aedc57c: Status 404 returned error can't find the container with id e2989b780cab475a6ac1a655df57af9a9ff7e0cccf17da2869b0f0df2aedc57c Feb 25 11:33:48 crc kubenswrapper[5005]: I0225 11:33:48.641023 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" event={"ID":"c058ccb9-0d11-4b5b-bbc8-bb88b605a661","Type":"ContainerStarted","Data":"c5b2c1c3cf0331780e26a9fb4df23d1565ee9931bb014ce678661cd8b15790fe"} Feb 25 11:33:48 crc kubenswrapper[5005]: I0225 11:33:48.641398 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:48 crc kubenswrapper[5005]: I0225 11:33:48.641411 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" event={"ID":"c058ccb9-0d11-4b5b-bbc8-bb88b605a661","Type":"ContainerStarted","Data":"e2989b780cab475a6ac1a655df57af9a9ff7e0cccf17da2869b0f0df2aedc57c"} Feb 25 11:33:48 crc kubenswrapper[5005]: I0225 11:33:48.676067 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" podStartSLOduration=34.676046002 podStartE2EDuration="34.676046002s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:33:48.664577185 +0000 UTC m=+942.705309512" watchObservedRunningTime="2026-02-25 11:33:48.676046002 +0000 UTC 
m=+942.716778329" Feb 25 11:33:49 crc kubenswrapper[5005]: I0225 11:33:49.648709 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" event={"ID":"153c478a-59a9-4d31-8822-cfb3b62d9c39","Type":"ContainerStarted","Data":"ee80df5ac55af850535badbfa2bf2d269aeaecdcd8b661de131334b221525b6d"} Feb 25 11:33:49 crc kubenswrapper[5005]: I0225 11:33:49.649498 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:49 crc kubenswrapper[5005]: I0225 11:33:49.670757 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" podStartSLOduration=33.798691236 podStartE2EDuration="35.670738901s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:46.944699425 +0000 UTC m=+940.985431762" lastFinishedPulling="2026-02-25 11:33:48.8167471 +0000 UTC m=+942.857479427" observedRunningTime="2026-02-25 11:33:49.664988017 +0000 UTC m=+943.705720344" watchObservedRunningTime="2026-02-25 11:33:49.670738901 +0000 UTC m=+943.711471238" Feb 25 11:33:50 crc kubenswrapper[5005]: I0225 11:33:50.663927 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" event={"ID":"ce436a93-47a7-48b4-8134-04bd264c6105","Type":"ContainerStarted","Data":"20b0d4f4e45acf15fed3f82d6ad0a0498c508f2f8e7439e3510dabb7915f9250"} Feb 25 11:33:50 crc kubenswrapper[5005]: I0225 11:33:50.704744 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" podStartSLOduration=33.858463304 podStartE2EDuration="36.704724618s" podCreationTimestamp="2026-02-25 11:33:14 +0000 UTC" firstStartedPulling="2026-02-25 11:33:47.189011027 +0000 UTC 
m=+941.229743364" lastFinishedPulling="2026-02-25 11:33:50.035272351 +0000 UTC m=+944.076004678" observedRunningTime="2026-02-25 11:33:50.696128348 +0000 UTC m=+944.736860685" watchObservedRunningTime="2026-02-25 11:33:50.704724618 +0000 UTC m=+944.745456965" Feb 25 11:33:51 crc kubenswrapper[5005]: I0225 11:33:51.671157 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:56 crc kubenswrapper[5005]: I0225 11:33:56.462959 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-8jrrj" Feb 25 11:33:56 crc kubenswrapper[5005]: I0225 11:33:56.724522 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4" Feb 25 11:33:57 crc kubenswrapper[5005]: I0225 11:33:57.165715 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-655db7587-5x4xq" Feb 25 11:33:58 crc kubenswrapper[5005]: I0225 11:33:58.087076 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:33:58 crc kubenswrapper[5005]: I0225 11:33:58.087169 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.139510 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29533654-trt7q"] Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.140923 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.143635 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.145701 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.146031 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.158654 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533654-trt7q"] Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.326437 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbpsk\" (UniqueName: \"kubernetes.io/projected/78bb9afc-1f26-46ce-bf55-2087e295f2e8-kube-api-access-rbpsk\") pod \"auto-csr-approver-29533654-trt7q\" (UID: \"78bb9afc-1f26-46ce-bf55-2087e295f2e8\") " pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.427330 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbpsk\" (UniqueName: \"kubernetes.io/projected/78bb9afc-1f26-46ce-bf55-2087e295f2e8-kube-api-access-rbpsk\") pod \"auto-csr-approver-29533654-trt7q\" (UID: \"78bb9afc-1f26-46ce-bf55-2087e295f2e8\") " pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.453906 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbpsk\" (UniqueName: 
\"kubernetes.io/projected/78bb9afc-1f26-46ce-bf55-2087e295f2e8-kube-api-access-rbpsk\") pod \"auto-csr-approver-29533654-trt7q\" (UID: \"78bb9afc-1f26-46ce-bf55-2087e295f2e8\") " pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.458383 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:00 crc kubenswrapper[5005]: I0225 11:34:00.885038 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533654-trt7q"] Feb 25 11:34:01 crc kubenswrapper[5005]: I0225 11:34:01.747667 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533654-trt7q" event={"ID":"78bb9afc-1f26-46ce-bf55-2087e295f2e8","Type":"ContainerStarted","Data":"db84289336bde776f419af56499daeb5e28d0f291761be41a8a50936adf6ab51"} Feb 25 11:34:02 crc kubenswrapper[5005]: I0225 11:34:02.758525 5005 generic.go:334] "Generic (PLEG): container finished" podID="78bb9afc-1f26-46ce-bf55-2087e295f2e8" containerID="11f599fbfa88123dd57ef003dbd336d8f4f102be15526c05b87074389c64f328" exitCode=0 Feb 25 11:34:02 crc kubenswrapper[5005]: I0225 11:34:02.759856 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533654-trt7q" event={"ID":"78bb9afc-1f26-46ce-bf55-2087e295f2e8","Type":"ContainerDied","Data":"11f599fbfa88123dd57ef003dbd336d8f4f102be15526c05b87074389c64f328"} Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.043345 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.181285 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbpsk\" (UniqueName: \"kubernetes.io/projected/78bb9afc-1f26-46ce-bf55-2087e295f2e8-kube-api-access-rbpsk\") pod \"78bb9afc-1f26-46ce-bf55-2087e295f2e8\" (UID: \"78bb9afc-1f26-46ce-bf55-2087e295f2e8\") " Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.186938 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bb9afc-1f26-46ce-bf55-2087e295f2e8-kube-api-access-rbpsk" (OuterVolumeSpecName: "kube-api-access-rbpsk") pod "78bb9afc-1f26-46ce-bf55-2087e295f2e8" (UID: "78bb9afc-1f26-46ce-bf55-2087e295f2e8"). InnerVolumeSpecName "kube-api-access-rbpsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.282335 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbpsk\" (UniqueName: \"kubernetes.io/projected/78bb9afc-1f26-46ce-bf55-2087e295f2e8-kube-api-access-rbpsk\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.775796 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533654-trt7q" event={"ID":"78bb9afc-1f26-46ce-bf55-2087e295f2e8","Type":"ContainerDied","Data":"db84289336bde776f419af56499daeb5e28d0f291761be41a8a50936adf6ab51"} Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.776138 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db84289336bde776f419af56499daeb5e28d0f291761be41a8a50936adf6ab51" Feb 25 11:34:04 crc kubenswrapper[5005]: I0225 11:34:04.775891 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533654-trt7q" Feb 25 11:34:05 crc kubenswrapper[5005]: I0225 11:34:05.114994 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533648-q2lgl"] Feb 25 11:34:05 crc kubenswrapper[5005]: I0225 11:34:05.120947 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533648-q2lgl"] Feb 25 11:34:06 crc kubenswrapper[5005]: I0225 11:34:06.702698 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99160239-c586-4344-b60e-164b1eb940cb" path="/var/lib/kubelet/pods/99160239-c586-4344-b60e-164b1eb940cb/volumes" Feb 25 11:34:08 crc kubenswrapper[5005]: I0225 11:34:08.229434 5005 scope.go:117] "RemoveContainer" containerID="a9983490c071ee48c65d734c038f6ea8e7e95f5ff000491bf47db5fcee24ff43" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.027040 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vs7h"] Feb 25 11:34:15 crc kubenswrapper[5005]: E0225 11:34:15.027928 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bb9afc-1f26-46ce-bf55-2087e295f2e8" containerName="oc" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.027943 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bb9afc-1f26-46ce-bf55-2087e295f2e8" containerName="oc" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.028110 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bb9afc-1f26-46ce-bf55-2087e295f2e8" containerName="oc" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.034466 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.038283 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r89dt" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.038520 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.038679 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.038788 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.048321 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vs7h"] Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.108659 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j4gzh"] Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.110353 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.112347 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.117834 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j4gzh"] Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.142901 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2pr\" (UniqueName: \"kubernetes.io/projected/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-kube-api-access-jx2pr\") pod \"dnsmasq-dns-675f4bcbfc-8vs7h\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.142952 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-config\") pod \"dnsmasq-dns-675f4bcbfc-8vs7h\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.244624 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-config\") pod \"dnsmasq-dns-675f4bcbfc-8vs7h\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.244724 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc 
kubenswrapper[5005]: I0225 11:34:15.244751 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-config\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.244806 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7w5\" (UniqueName: \"kubernetes.io/projected/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-kube-api-access-jt7w5\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.244833 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx2pr\" (UniqueName: \"kubernetes.io/projected/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-kube-api-access-jx2pr\") pod \"dnsmasq-dns-675f4bcbfc-8vs7h\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.245758 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-config\") pod \"dnsmasq-dns-675f4bcbfc-8vs7h\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.262718 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx2pr\" (UniqueName: \"kubernetes.io/projected/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-kube-api-access-jx2pr\") pod \"dnsmasq-dns-675f4bcbfc-8vs7h\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 
11:34:15.346547 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.346607 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-config\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.346650 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7w5\" (UniqueName: \"kubernetes.io/projected/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-kube-api-access-jt7w5\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.347533 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.347573 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-config\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.356821 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.377286 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7w5\" (UniqueName: \"kubernetes.io/projected/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-kube-api-access-jt7w5\") pod \"dnsmasq-dns-78dd6ddcc-j4gzh\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.425670 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.887596 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vs7h"] Feb 25 11:34:15 crc kubenswrapper[5005]: W0225 11:34:15.897020 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636c0b8e_9d52_456e_b2d0_31aeb3a4c849.slice/crio-b3d010f6b771d276532c0de7b2bd4838ac884846a8f9f74f6f0eb3f154f82b37 WatchSource:0}: Error finding container b3d010f6b771d276532c0de7b2bd4838ac884846a8f9f74f6f0eb3f154f82b37: Status 404 returned error can't find the container with id b3d010f6b771d276532c0de7b2bd4838ac884846a8f9f74f6f0eb3f154f82b37 Feb 25 11:34:15 crc kubenswrapper[5005]: I0225 11:34:15.939792 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j4gzh"] Feb 25 11:34:15 crc kubenswrapper[5005]: W0225 11:34:15.948336 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d2bd2a_3b7f_4416_83da_07ad21b753f5.slice/crio-d7e42993b3a83b4595753299da73836ea79eb7fb5ec32667144c7fcfea1626ad WatchSource:0}: Error finding container d7e42993b3a83b4595753299da73836ea79eb7fb5ec32667144c7fcfea1626ad: Status 404 returned error can't find the container with id 
d7e42993b3a83b4595753299da73836ea79eb7fb5ec32667144c7fcfea1626ad Feb 25 11:34:16 crc kubenswrapper[5005]: I0225 11:34:16.883175 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" event={"ID":"a2d2bd2a-3b7f-4416-83da-07ad21b753f5","Type":"ContainerStarted","Data":"d7e42993b3a83b4595753299da73836ea79eb7fb5ec32667144c7fcfea1626ad"} Feb 25 11:34:16 crc kubenswrapper[5005]: I0225 11:34:16.884785 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" event={"ID":"636c0b8e-9d52-456e-b2d0-31aeb3a4c849","Type":"ContainerStarted","Data":"b3d010f6b771d276532c0de7b2bd4838ac884846a8f9f74f6f0eb3f154f82b37"} Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.706995 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vs7h"] Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.746156 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9524w"] Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.747347 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.753848 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9524w"] Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.885513 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sbtq\" (UniqueName: \"kubernetes.io/projected/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-kube-api-access-8sbtq\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.886400 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-config\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.886576 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.988348 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sbtq\" (UniqueName: \"kubernetes.io/projected/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-kube-api-access-8sbtq\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.988499 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-config\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.988530 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.989454 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-config\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:17 crc kubenswrapper[5005]: I0225 11:34:17.991118 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.010309 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j4gzh"] Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.012237 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sbtq\" (UniqueName: \"kubernetes.io/projected/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-kube-api-access-8sbtq\") pod \"dnsmasq-dns-666b6646f7-9524w\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.037632 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-m4bbm"] Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.042143 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.047356 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m4bbm"] Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.113684 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.190999 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-config\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.191135 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9pr\" (UniqueName: \"kubernetes.io/projected/3705a4c5-72f2-4423-9987-7b182bba8ae6-kube-api-access-zh9pr\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.191161 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.294694 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-config\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.295051 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9pr\" (UniqueName: \"kubernetes.io/projected/3705a4c5-72f2-4423-9987-7b182bba8ae6-kube-api-access-zh9pr\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.295071 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.296012 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.296105 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-config\") pod \"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.313329 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9pr\" (UniqueName: \"kubernetes.io/projected/3705a4c5-72f2-4423-9987-7b182bba8ae6-kube-api-access-zh9pr\") pod 
\"dnsmasq-dns-57d769cc4f-m4bbm\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.368982 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.577472 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9524w"] Feb 25 11:34:18 crc kubenswrapper[5005]: W0225 11:34:18.590712 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd71bf24a_67d7_40ba_8368_5dfb5d2b6036.slice/crio-50155660ec794158d706bc8193ed8e4be985e8bb7600dca5bdfd229f3bc72cf6 WatchSource:0}: Error finding container 50155660ec794158d706bc8193ed8e4be985e8bb7600dca5bdfd229f3bc72cf6: Status 404 returned error can't find the container with id 50155660ec794158d706bc8193ed8e4be985e8bb7600dca5bdfd229f3bc72cf6 Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.781350 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m4bbm"] Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.896577 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.898191 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.901728 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921400 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921578 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921689 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921795 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921695 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5dcsx" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921819 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.921880 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 25 11:34:18 crc kubenswrapper[5005]: I0225 11:34:18.935737 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9524w" event={"ID":"d71bf24a-67d7-40ba-8368-5dfb5d2b6036","Type":"ContainerStarted","Data":"50155660ec794158d706bc8193ed8e4be985e8bb7600dca5bdfd229f3bc72cf6"} Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.005859 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0" Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.005960 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/773c4b57-17bf-4159-9b75-81072c68692e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0" Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006135 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnnc2\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-kube-api-access-tnnc2\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0" Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006276 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0" Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006351 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-config-data\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0" Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006426 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006479 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006528 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006595 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/773c4b57-17bf-4159-9b75-81072c68692e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006741 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.006773 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.108518 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-config-data\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.108608 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.108657 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.108739 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.108968 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/773c4b57-17bf-4159-9b75-81072c68692e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109024 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109047 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109074 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109114 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/773c4b57-17bf-4159-9b75-81072c68692e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109150 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnnc2\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-kube-api-access-tnnc2\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109187 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.109804 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.110137 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-config-data\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.110447 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.110598 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.110736 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.118651 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.119563 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/773c4b57-17bf-4159-9b75-81072c68692e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.122795 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.125177 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/773c4b57-17bf-4159-9b75-81072c68692e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.126872 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.128490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnnc2\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-kube-api-access-tnnc2\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.170427 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.171228 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.171620 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.174972 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.175229 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.175410 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.175576 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.175719 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.175862 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6pt8q"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.175994 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.179906 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.260146 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.312858 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.312926 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.312957 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86534792-e561-447f-bcef-4ff82b02561c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.312988 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrdx\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-kube-api-access-hkrdx\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313022 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313049 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313100 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313151 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313177 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313212 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.313260 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86534792-e561-447f-bcef-4ff82b02561c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414470 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414539 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86534792-e561-447f-bcef-4ff82b02561c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414569 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrdx\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-kube-api-access-hkrdx\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414592 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414610 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414637 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414674 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414695 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414722 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414743 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86534792-e561-447f-bcef-4ff82b02561c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414771 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.414958 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.415680 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.415965 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.415990 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.416150 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.417020 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.419523 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.421225 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86534792-e561-447f-bcef-4ff82b02561c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.421398 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.428848 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86534792-e561-447f-bcef-4ff82b02561c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.434492 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrdx\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-kube-api-access-hkrdx\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.465236 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:19 crc kubenswrapper[5005]: I0225 11:34:19.522758 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.362248 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.363819 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.375695 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.375943 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.376094 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dt67h"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.376242 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.378884 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.379681 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.427985 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe9dcc0a-0321-4f68-929f-fb5393b97e38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428033 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428064 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gd5\" (UniqueName: \"kubernetes.io/projected/fe9dcc0a-0321-4f68-929f-fb5393b97e38-kube-api-access-f2gd5\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428163 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9dcc0a-0321-4f68-929f-fb5393b97e38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428197 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428220 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428245 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9dcc0a-0321-4f68-929f-fb5393b97e38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.428286 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.529597 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9dcc0a-0321-4f68-929f-fb5393b97e38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.529975 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530005 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530030 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9dcc0a-0321-4f68-929f-fb5393b97e38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530070 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530129 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe9dcc0a-0321-4f68-929f-fb5393b97e38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530160 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530191 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gd5\" (UniqueName: \"kubernetes.io/projected/fe9dcc0a-0321-4f68-929f-fb5393b97e38-kube-api-access-f2gd5\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530396 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.530909 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe9dcc0a-0321-4f68-929f-fb5393b97e38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.531315 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.532805 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.533022 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe9dcc0a-0321-4f68-929f-fb5393b97e38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.534209 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe9dcc0a-0321-4f68-929f-fb5393b97e38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.536001 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe9dcc0a-0321-4f68-929f-fb5393b97e38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.549343 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gd5\" (UniqueName: \"kubernetes.io/projected/fe9dcc0a-0321-4f68-929f-fb5393b97e38-kube-api-access-f2gd5\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.571433 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"fe9dcc0a-0321-4f68-929f-fb5393b97e38\") " pod="openstack/openstack-galera-0"
Feb 25 11:34:20 crc kubenswrapper[5005]: I0225 11:34:20.730513 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.831657 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.834439 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.837344 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.837598 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.837661 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-knprz"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.837816 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.844493 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.951297 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.951444 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.951514 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.951545 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbrx\" (UniqueName: \"kubernetes.io/projected/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-kube-api-access-2xbrx\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0"
Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.951597 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0"
Feb 25 11:34:21 crc
kubenswrapper[5005]: I0225 11:34:21.951742 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.951851 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:21 crc kubenswrapper[5005]: I0225 11:34:21.952048 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.053846 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054141 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054230 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054330 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054446 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054458 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054591 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054646 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.054671 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbrx\" (UniqueName: \"kubernetes.io/projected/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-kube-api-access-2xbrx\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.055083 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.055577 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.055710 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.057190 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.060230 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.061310 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.086121 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbrx\" (UniqueName: \"kubernetes.io/projected/7d41f2ea-e694-463b-a0bb-d8b987bab0b4-kube-api-access-2xbrx\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.092666 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7d41f2ea-e694-463b-a0bb-d8b987bab0b4\") " pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.119558 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.120650 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.122387 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.122798 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hppsl" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.123111 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.139632 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.164947 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.165038 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwbxq\" (UniqueName: \"kubernetes.io/projected/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-kube-api-access-xwbxq\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.165078 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-config-data\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.165102 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-kolla-config\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.165117 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.214092 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.267143 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwbxq\" (UniqueName: \"kubernetes.io/projected/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-kube-api-access-xwbxq\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.267201 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-config-data\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.267229 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-kolla-config\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.267246 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.267293 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.268280 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-kolla-config\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.268698 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-config-data\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.271121 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.271469 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc 
kubenswrapper[5005]: I0225 11:34:22.280790 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwbxq\" (UniqueName: \"kubernetes.io/projected/9b0ce46e-c63e-49fa-b35c-10745cf3abc4-kube-api-access-xwbxq\") pod \"memcached-0\" (UID: \"9b0ce46e-c63e-49fa-b35c-10745cf3abc4\") " pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.441461 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 25 11:34:22 crc kubenswrapper[5005]: I0225 11:34:22.963722 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" event={"ID":"3705a4c5-72f2-4423-9987-7b182bba8ae6","Type":"ContainerStarted","Data":"ff9c40a8421b0bdc698cd11882eb4f535c9db0a29bd4b532d3b166128c5cba65"} Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.384024 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.385664 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.388002 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-6fh7x" Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.396850 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.518166 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmg8r\" (UniqueName: \"kubernetes.io/projected/06987e0a-c281-4cbf-acdf-5831dd0b3561-kube-api-access-bmg8r\") pod \"kube-state-metrics-0\" (UID: \"06987e0a-c281-4cbf-acdf-5831dd0b3561\") " pod="openstack/kube-state-metrics-0" Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.618999 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmg8r\" (UniqueName: \"kubernetes.io/projected/06987e0a-c281-4cbf-acdf-5831dd0b3561-kube-api-access-bmg8r\") pod \"kube-state-metrics-0\" (UID: \"06987e0a-c281-4cbf-acdf-5831dd0b3561\") " pod="openstack/kube-state-metrics-0" Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.637962 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmg8r\" (UniqueName: \"kubernetes.io/projected/06987e0a-c281-4cbf-acdf-5831dd0b3561-kube-api-access-bmg8r\") pod \"kube-state-metrics-0\" (UID: \"06987e0a-c281-4cbf-acdf-5831dd0b3561\") " pod="openstack/kube-state-metrics-0" Feb 25 11:34:24 crc kubenswrapper[5005]: I0225 11:34:24.704488 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.089364 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mhhkt"] Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.090508 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.090862 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.090916 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.092888 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.093072 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.093518 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-lh6v2" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.105351 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-stnkf"] Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.107292 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.119911 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mhhkt"] Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.126580 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-stnkf"] Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.172099 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.173495 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.175914 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.176178 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.177199 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.177321 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.177441 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hclbk" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.179060 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182397 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-log-ovn\") pod 
\"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182457 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-run\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182484 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s75hp\" (UniqueName: \"kubernetes.io/projected/d7433eab-76d5-403c-8949-6b99fa8624d5-kube-api-access-s75hp\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182542 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9b011-87e3-4dd1-bec8-10c27806cad6-scripts\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182578 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-run\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182612 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7433eab-76d5-403c-8949-6b99fa8624d5-scripts\") pod \"ovn-controller-mhhkt\" (UID: 
\"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182696 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7433eab-76d5-403c-8949-6b99fa8624d5-combined-ca-bundle\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182760 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-log\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182861 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-lib\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.182986 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9kp\" (UniqueName: \"kubernetes.io/projected/8bb9b011-87e3-4dd1-bec8-10c27806cad6-kube-api-access-8s9kp\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.183007 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7433eab-76d5-403c-8949-6b99fa8624d5-ovn-controller-tls-certs\") pod \"ovn-controller-mhhkt\" (UID: 
\"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.183037 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-run-ovn\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.183102 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-etc-ovs\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.287288 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7433eab-76d5-403c-8949-6b99fa8624d5-scripts\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288294 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288350 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7433eab-76d5-403c-8949-6b99fa8624d5-combined-ca-bundle\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 
11:34:28.288398 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-log\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288423 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288458 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gxpl\" (UniqueName: \"kubernetes.io/projected/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-kube-api-access-8gxpl\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288497 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-lib\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288543 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9kp\" (UniqueName: \"kubernetes.io/projected/8bb9b011-87e3-4dd1-bec8-10c27806cad6-kube-api-access-8s9kp\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288565 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7433eab-76d5-403c-8949-6b99fa8624d5-ovn-controller-tls-certs\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288609 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288634 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288661 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-run-ovn\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288681 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288716 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-etc-ovs\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288749 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-log-ovn\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288770 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288810 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s75hp\" (UniqueName: \"kubernetes.io/projected/d7433eab-76d5-403c-8949-6b99fa8624d5-kube-api-access-s75hp\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288832 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-run\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288875 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9b011-87e3-4dd1-bec8-10c27806cad6-scripts\") pod \"ovn-controller-ovs-stnkf\" (UID: 
\"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288896 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.288938 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-run\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.289345 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-log-ovn\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.289538 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-run-ovn\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.289655 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-etc-ovs\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.289730 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d7433eab-76d5-403c-8949-6b99fa8624d5-var-run\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.289858 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-run\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.290049 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-log\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.290285 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8bb9b011-87e3-4dd1-bec8-10c27806cad6-var-lib\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.290497 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7433eab-76d5-403c-8949-6b99fa8624d5-scripts\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.295394 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7433eab-76d5-403c-8949-6b99fa8624d5-ovn-controller-tls-certs\") pod \"ovn-controller-mhhkt\" (UID: 
\"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.296059 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7433eab-76d5-403c-8949-6b99fa8624d5-combined-ca-bundle\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.298131 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bb9b011-87e3-4dd1-bec8-10c27806cad6-scripts\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.308861 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9kp\" (UniqueName: \"kubernetes.io/projected/8bb9b011-87e3-4dd1-bec8-10c27806cad6-kube-api-access-8s9kp\") pod \"ovn-controller-ovs-stnkf\" (UID: \"8bb9b011-87e3-4dd1-bec8-10c27806cad6\") " pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.313220 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s75hp\" (UniqueName: \"kubernetes.io/projected/d7433eab-76d5-403c-8949-6b99fa8624d5-kube-api-access-s75hp\") pod \"ovn-controller-mhhkt\" (UID: \"d7433eab-76d5-403c-8949-6b99fa8624d5\") " pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.390870 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.391568 
5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.391622 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.391691 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.391774 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.391832 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.391949 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" 
(UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.392003 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.392049 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gxpl\" (UniqueName: \"kubernetes.io/projected/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-kube-api-access-8gxpl\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.392352 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.393337 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.400680 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.401509 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.401522 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.401741 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.409149 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gxpl\" (UniqueName: \"kubernetes.io/projected/5b6d805e-5f35-4f58-a71a-5bdbb4eba017-kube-api-access-8gxpl\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.413101 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b6d805e-5f35-4f58-a71a-5bdbb4eba017\") " pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.421600 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.430847 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:28 crc kubenswrapper[5005]: I0225 11:34:28.500232 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: E0225 11:34:30.486323 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 25 11:34:30 crc kubenswrapper[5005]: E0225 11:34:30.486825 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jx2pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8vs7h_openstack(636c0b8e-9d52-456e-b2d0-31aeb3a4c849): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 25 11:34:30 crc kubenswrapper[5005]: E0225 11:34:30.488123 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" podUID="636c0b8e-9d52-456e-b2d0-31aeb3a4c849" Feb 25 11:34:30 crc kubenswrapper[5005]: E0225 11:34:30.521899 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 25 11:34:30 crc kubenswrapper[5005]: E0225 11:34:30.522898 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jt7w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-j4gzh_openstack(a2d2bd2a-3b7f-4416-83da-07ad21b753f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 25 11:34:30 crc kubenswrapper[5005]: E0225 11:34:30.524061 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" podUID="a2d2bd2a-3b7f-4416-83da-07ad21b753f5" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.804712 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.854538 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.855948 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.859542 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.859820 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dqn28" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.860454 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.863320 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.864856 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949163 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949212 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmzq\" (UniqueName: \"kubernetes.io/projected/69573081-3b63-4aab-b734-c29867f9f0c1-kube-api-access-rvmzq\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949259 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69573081-3b63-4aab-b734-c29867f9f0c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949277 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949296 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69573081-3b63-4aab-b734-c29867f9f0c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949324 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69573081-3b63-4aab-b734-c29867f9f0c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949342 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:30 crc kubenswrapper[5005]: I0225 11:34:30.949412 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc 
kubenswrapper[5005]: I0225 11:34:31.050158 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050420 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69573081-3b63-4aab-b734-c29867f9f0c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050452 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69573081-3b63-4aab-b734-c29867f9f0c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050470 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050523 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050555 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050573 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvmzq\" (UniqueName: \"kubernetes.io/projected/69573081-3b63-4aab-b734-c29867f9f0c1-kube-api-access-rvmzq\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.050614 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69573081-3b63-4aab-b734-c29867f9f0c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.051133 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69573081-3b63-4aab-b734-c29867f9f0c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.051518 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.054365 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69573081-3b63-4aab-b734-c29867f9f0c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " 
pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.054569 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69573081-3b63-4aab-b734-c29867f9f0c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.056788 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"773c4b57-17bf-4159-9b75-81072c68692e","Type":"ContainerStarted","Data":"473bc070e3444253060c438faf3aa2fb5e55dc8d4b3b3563c81f3038d76081bb"} Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.058614 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.059857 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.061507 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69573081-3b63-4aab-b734-c29867f9f0c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.074513 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvmzq\" (UniqueName: 
\"kubernetes.io/projected/69573081-3b63-4aab-b734-c29867f9f0c1-kube-api-access-rvmzq\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.122417 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.129261 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69573081-3b63-4aab-b734-c29867f9f0c1\") " pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.280136 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.295540 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.300300 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.330143 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mhhkt"] Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.360256 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.437753 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 25 11:34:31 crc kubenswrapper[5005]: W0225 11:34:31.444308 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d41f2ea_e694_463b_a0bb_d8b987bab0b4.slice/crio-4bfe1078f6f83d05c01309aaa0e74a899658b2cea322f02f03afa283b40fd657 WatchSource:0}: Error finding container 4bfe1078f6f83d05c01309aaa0e74a899658b2cea322f02f03afa283b40fd657: Status 404 returned error can't find the container with id 4bfe1078f6f83d05c01309aaa0e74a899658b2cea322f02f03afa283b40fd657 Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.447871 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.484083 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.506343 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.559663 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt7w5\" (UniqueName: \"kubernetes.io/projected/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-kube-api-access-jt7w5\") pod \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.559976 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-config\") pod \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.560019 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx2pr\" (UniqueName: \"kubernetes.io/projected/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-kube-api-access-jx2pr\") pod \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\" (UID: \"636c0b8e-9d52-456e-b2d0-31aeb3a4c849\") " Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.560088 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-config\") pod \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.560121 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-dns-svc\") pod \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\" (UID: \"a2d2bd2a-3b7f-4416-83da-07ad21b753f5\") " Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.560898 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2d2bd2a-3b7f-4416-83da-07ad21b753f5" (UID: "a2d2bd2a-3b7f-4416-83da-07ad21b753f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.561295 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-config" (OuterVolumeSpecName: "config") pod "636c0b8e-9d52-456e-b2d0-31aeb3a4c849" (UID: "636c0b8e-9d52-456e-b2d0-31aeb3a4c849"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.561828 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-config" (OuterVolumeSpecName: "config") pod "a2d2bd2a-3b7f-4416-83da-07ad21b753f5" (UID: "a2d2bd2a-3b7f-4416-83da-07ad21b753f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.565439 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-kube-api-access-jt7w5" (OuterVolumeSpecName: "kube-api-access-jt7w5") pod "a2d2bd2a-3b7f-4416-83da-07ad21b753f5" (UID: "a2d2bd2a-3b7f-4416-83da-07ad21b753f5"). InnerVolumeSpecName "kube-api-access-jt7w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.565616 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-kube-api-access-jx2pr" (OuterVolumeSpecName: "kube-api-access-jx2pr") pod "636c0b8e-9d52-456e-b2d0-31aeb3a4c849" (UID: "636c0b8e-9d52-456e-b2d0-31aeb3a4c849"). InnerVolumeSpecName "kube-api-access-jx2pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.585770 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-stnkf"] Feb 25 11:34:31 crc kubenswrapper[5005]: W0225 11:34:31.587425 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb9b011_87e3_4dd1_bec8_10c27806cad6.slice/crio-4968c9f00c57d94f0afe1f808851d60548f7cd32e5f4a0924daf31d4f8a492af WatchSource:0}: Error finding container 4968c9f00c57d94f0afe1f808851d60548f7cd32e5f4a0924daf31d4f8a492af: Status 404 returned error can't find the container with id 4968c9f00c57d94f0afe1f808851d60548f7cd32e5f4a0924daf31d4f8a492af Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.662324 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt7w5\" (UniqueName: \"kubernetes.io/projected/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-kube-api-access-jt7w5\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.662383 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.662397 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx2pr\" (UniqueName: \"kubernetes.io/projected/636c0b8e-9d52-456e-b2d0-31aeb3a4c849-kube-api-access-jx2pr\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.662406 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.662415 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a2d2bd2a-3b7f-4416-83da-07ad21b753f5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:31 crc kubenswrapper[5005]: I0225 11:34:31.850191 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.072140 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b6d805e-5f35-4f58-a71a-5bdbb4eba017","Type":"ContainerStarted","Data":"6969179fc99c6100102ec7da2663e1d8ee2b2d7384e880cbd2bd8f2fadf377e5"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.074953 5005 generic.go:334] "Generic (PLEG): container finished" podID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerID="295a09df38822e89a91d630580add5c3c34316d92dc1ad422865a197ba919d84" exitCode=0 Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.075010 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9524w" event={"ID":"d71bf24a-67d7-40ba-8368-5dfb5d2b6036","Type":"ContainerDied","Data":"295a09df38822e89a91d630580add5c3c34316d92dc1ad422865a197ba919d84"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.077947 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mhhkt" event={"ID":"d7433eab-76d5-403c-8949-6b99fa8624d5","Type":"ContainerStarted","Data":"97be4c119069c5b3329b658d117803ab0beddc7cb8915af1b41a2dab64b37750"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.082072 5005 generic.go:334] "Generic (PLEG): container finished" podID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerID="c646e8d80fdbf6fe3b4fe057e0482b91a4cb6df312292d320db58f9b668c059d" exitCode=0 Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.082136 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" 
event={"ID":"3705a4c5-72f2-4423-9987-7b182bba8ae6","Type":"ContainerDied","Data":"c646e8d80fdbf6fe3b4fe057e0482b91a4cb6df312292d320db58f9b668c059d"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.089850 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" event={"ID":"a2d2bd2a-3b7f-4416-83da-07ad21b753f5","Type":"ContainerDied","Data":"d7e42993b3a83b4595753299da73836ea79eb7fb5ec32667144c7fcfea1626ad"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.089910 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j4gzh" Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.091786 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06987e0a-c281-4cbf-acdf-5831dd0b3561","Type":"ContainerStarted","Data":"eb1dfda2041125fd85dfe3a61ea07850bf76f61a293536aaf600dc6f458e6848"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.094607 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" event={"ID":"636c0b8e-9d52-456e-b2d0-31aeb3a4c849","Type":"ContainerDied","Data":"b3d010f6b771d276532c0de7b2bd4838ac884846a8f9f74f6f0eb3f154f82b37"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.094667 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vs7h" Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.101533 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69573081-3b63-4aab-b734-c29867f9f0c1","Type":"ContainerStarted","Data":"d15745ba2c8a88f8e89375206a2e27c80067e5a61302e163016ca6802fb8ae7a"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.103624 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9b0ce46e-c63e-49fa-b35c-10745cf3abc4","Type":"ContainerStarted","Data":"5171efe05bc00bf51c8a5cd267ccd95b7fd6e05b0e4baeba793702b91a28331f"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.109025 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe9dcc0a-0321-4f68-929f-fb5393b97e38","Type":"ContainerStarted","Data":"53653dd1f82cc0e4b22455049a90970fc57c557e24925399e8b6ab431c5854cb"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.112073 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7d41f2ea-e694-463b-a0bb-d8b987bab0b4","Type":"ContainerStarted","Data":"4bfe1078f6f83d05c01309aaa0e74a899658b2cea322f02f03afa283b40fd657"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.116712 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stnkf" event={"ID":"8bb9b011-87e3-4dd1-bec8-10c27806cad6","Type":"ContainerStarted","Data":"4968c9f00c57d94f0afe1f808851d60548f7cd32e5f4a0924daf31d4f8a492af"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.118796 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86534792-e561-447f-bcef-4ff82b02561c","Type":"ContainerStarted","Data":"69288773e998c6320c4ba0d768e4c5ef4117d64813dc89bdb3d7a2f050dcdf1e"} Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.161413 5005 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j4gzh"] Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.166976 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j4gzh"] Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.203742 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vs7h"] Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.214615 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vs7h"] Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.701341 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636c0b8e-9d52-456e-b2d0-31aeb3a4c849" path="/var/lib/kubelet/pods/636c0b8e-9d52-456e-b2d0-31aeb3a4c849/volumes" Feb 25 11:34:32 crc kubenswrapper[5005]: I0225 11:34:32.701892 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d2bd2a-3b7f-4416-83da-07ad21b753f5" path="/var/lib/kubelet/pods/a2d2bd2a-3b7f-4416-83da-07ad21b753f5/volumes" Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.176123 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69573081-3b63-4aab-b734-c29867f9f0c1","Type":"ContainerStarted","Data":"74c902eaa5d328fd95f552d4e483297e20208c3d7b487b2373a063e26770e348"} Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.179457 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b6d805e-5f35-4f58-a71a-5bdbb4eba017","Type":"ContainerStarted","Data":"62de651ee80967ce3784867ab28a7fcb510bc4032467e3bfbcf983c0cb40c462"} Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.183169 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9524w" event={"ID":"d71bf24a-67d7-40ba-8368-5dfb5d2b6036","Type":"ContainerStarted","Data":"ac356fd44d429a2ffeef1defa6530a8958b27e14a6f0b61c596922c248f08ba9"} Feb 25 
11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.185258 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.189824 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" event={"ID":"3705a4c5-72f2-4423-9987-7b182bba8ae6","Type":"ContainerStarted","Data":"e1f9b67ed4151e4756d4a11a7c586d419faf19a95df07d528fe6a0d771326d35"} Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.189973 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.193488 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9b0ce46e-c63e-49fa-b35c-10745cf3abc4","Type":"ContainerStarted","Data":"7a62bb52eb3fe55857d6ad208a745a133b4715a01ed5943c236732e7ed782a72"} Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.193835 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.196640 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe9dcc0a-0321-4f68-929f-fb5393b97e38","Type":"ContainerStarted","Data":"f342b678c59a1fd0c910bc75b1ece6584400de2b1073bf8c1d560e1d456b0cab"} Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.199723 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7d41f2ea-e694-463b-a0bb-d8b987bab0b4","Type":"ContainerStarted","Data":"6b54535ac7d6d40040a5464c0f89dfc34ae6245c86d24ce9e2ff3c3380d24558"} Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.207548 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-9524w" podStartSLOduration=11.024928212 podStartE2EDuration="23.207521985s" 
podCreationTimestamp="2026-02-25 11:34:17 +0000 UTC" firstStartedPulling="2026-02-25 11:34:18.593260984 +0000 UTC m=+972.633993311" lastFinishedPulling="2026-02-25 11:34:30.775854757 +0000 UTC m=+984.816587084" observedRunningTime="2026-02-25 11:34:40.203252816 +0000 UTC m=+994.243985173" watchObservedRunningTime="2026-02-25 11:34:40.207521985 +0000 UTC m=+994.248254342" Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.231656 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.050668625 podStartE2EDuration="18.231630219s" podCreationTimestamp="2026-02-25 11:34:22 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.345049412 +0000 UTC m=+985.385781739" lastFinishedPulling="2026-02-25 11:34:38.526010976 +0000 UTC m=+992.566743333" observedRunningTime="2026-02-25 11:34:40.221654165 +0000 UTC m=+994.262386482" watchObservedRunningTime="2026-02-25 11:34:40.231630219 +0000 UTC m=+994.272362546" Feb 25 11:34:40 crc kubenswrapper[5005]: I0225 11:34:40.281489 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" podStartSLOduration=13.468811837 podStartE2EDuration="22.281463105s" podCreationTimestamp="2026-02-25 11:34:18 +0000 UTC" firstStartedPulling="2026-02-25 11:34:22.164210603 +0000 UTC m=+976.204942930" lastFinishedPulling="2026-02-25 11:34:30.976861871 +0000 UTC m=+985.017594198" observedRunningTime="2026-02-25 11:34:40.276742471 +0000 UTC m=+994.317474808" watchObservedRunningTime="2026-02-25 11:34:40.281463105 +0000 UTC m=+994.322195432" Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.219952 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mhhkt" event={"ID":"d7433eab-76d5-403c-8949-6b99fa8624d5","Type":"ContainerStarted","Data":"9a2bf925d455d642afeceac947ed33bd2d63b9bbaa0cb8ffa57c0f27ef70e9d1"} Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.221170 5005 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mhhkt" Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.223706 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"773c4b57-17bf-4159-9b75-81072c68692e","Type":"ContainerStarted","Data":"06eb8e47ec1e952c5b7fecf002337ff4943629e25e9129542412322444f70bcd"} Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.225607 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06987e0a-c281-4cbf-acdf-5831dd0b3561","Type":"ContainerStarted","Data":"f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace"} Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.225771 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.227666 5005 generic.go:334] "Generic (PLEG): container finished" podID="8bb9b011-87e3-4dd1-bec8-10c27806cad6" containerID="59b858a20b9cb26a53cd5261d059273dda98968e4dbbfc35cd1e377912a6fc97" exitCode=0 Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.227722 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stnkf" event={"ID":"8bb9b011-87e3-4dd1-bec8-10c27806cad6","Type":"ContainerDied","Data":"59b858a20b9cb26a53cd5261d059273dda98968e4dbbfc35cd1e377912a6fc97"} Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.231783 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86534792-e561-447f-bcef-4ff82b02561c","Type":"ContainerStarted","Data":"626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2"} Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.246111 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mhhkt" podStartSLOduration=6.0790537669999996 podStartE2EDuration="13.246084917s" 
podCreationTimestamp="2026-02-25 11:34:28 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.359022207 +0000 UTC m=+985.399754534" lastFinishedPulling="2026-02-25 11:34:38.526053347 +0000 UTC m=+992.566785684" observedRunningTime="2026-02-25 11:34:41.239130706 +0000 UTC m=+995.279863083" watchObservedRunningTime="2026-02-25 11:34:41.246084917 +0000 UTC m=+995.286817264" Feb 25 11:34:41 crc kubenswrapper[5005]: I0225 11:34:41.350220 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.325855476 podStartE2EDuration="17.348344758s" podCreationTimestamp="2026-02-25 11:34:24 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.345023661 +0000 UTC m=+985.385755988" lastFinishedPulling="2026-02-25 11:34:39.367512943 +0000 UTC m=+993.408245270" observedRunningTime="2026-02-25 11:34:41.345042227 +0000 UTC m=+995.385774554" watchObservedRunningTime="2026-02-25 11:34:41.348344758 +0000 UTC m=+995.389077085" Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.240253 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stnkf" event={"ID":"8bb9b011-87e3-4dd1-bec8-10c27806cad6","Type":"ContainerStarted","Data":"c22a1679f7572a6f935364c4b2e6789ead3c63882f48de194d5abac8f7bc1ee0"} Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.240755 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stnkf" event={"ID":"8bb9b011-87e3-4dd1-bec8-10c27806cad6","Type":"ContainerStarted","Data":"835d7e33beeec1325aeb8bfdb641ede21b4075047de47c47a10a097ab65702a5"} Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.240802 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.240834 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:34:42 crc kubenswrapper[5005]: 
I0225 11:34:42.242737 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69573081-3b63-4aab-b734-c29867f9f0c1","Type":"ContainerStarted","Data":"80c6ee13e75d8d98c5e1a354f8e470fca6d82d971f48604514213a433cacfd15"} Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.245181 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b6d805e-5f35-4f58-a71a-5bdbb4eba017","Type":"ContainerStarted","Data":"4d539d6e360f78e6723b2e2de27f28b61379d3f01fb6811d49f13d17b814f9fb"} Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.306361 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-stnkf" podStartSLOduration=7.067112062 podStartE2EDuration="14.306333229s" podCreationTimestamp="2026-02-25 11:34:28 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.589345583 +0000 UTC m=+985.630077910" lastFinishedPulling="2026-02-25 11:34:38.82856675 +0000 UTC m=+992.869299077" observedRunningTime="2026-02-25 11:34:42.273061066 +0000 UTC m=+996.313793423" watchObservedRunningTime="2026-02-25 11:34:42.306333229 +0000 UTC m=+996.347065596" Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.310814 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.490868285 podStartE2EDuration="15.308596007s" podCreationTimestamp="2026-02-25 11:34:27 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.379801648 +0000 UTC m=+985.420533975" lastFinishedPulling="2026-02-25 11:34:41.19752937 +0000 UTC m=+995.238261697" observedRunningTime="2026-02-25 11:34:42.30048144 +0000 UTC m=+996.341213847" watchObservedRunningTime="2026-02-25 11:34:42.308596007 +0000 UTC m=+996.349328374" Feb 25 11:34:42 crc kubenswrapper[5005]: I0225 11:34:42.329171 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.969074592 
podStartE2EDuration="13.329121461s" podCreationTimestamp="2026-02-25 11:34:29 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.85619987 +0000 UTC m=+985.896932197" lastFinishedPulling="2026-02-25 11:34:41.216246739 +0000 UTC m=+995.256979066" observedRunningTime="2026-02-25 11:34:42.321483439 +0000 UTC m=+996.362215766" watchObservedRunningTime="2026-02-25 11:34:42.329121461 +0000 UTC m=+996.369853788" Feb 25 11:34:43 crc kubenswrapper[5005]: I0225 11:34:43.301488 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:43 crc kubenswrapper[5005]: I0225 11:34:43.340945 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:43 crc kubenswrapper[5005]: I0225 11:34:43.501288 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:43 crc kubenswrapper[5005]: I0225 11:34:43.501405 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:43 crc kubenswrapper[5005]: I0225 11:34:43.561169 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:43 crc kubenswrapper[5005]: E0225 11:34:43.646002 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d41f2ea_e694_463b_a0bb_d8b987bab0b4.slice/crio-conmon-6b54535ac7d6d40040a5464c0f89dfc34ae6245c86d24ce9e2ff3c3380d24558.scope\": RecentStats: unable to find data in memory cache]" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.265365 5005 generic.go:334] "Generic (PLEG): container finished" podID="fe9dcc0a-0321-4f68-929f-fb5393b97e38" containerID="f342b678c59a1fd0c910bc75b1ece6584400de2b1073bf8c1d560e1d456b0cab" exitCode=0 Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.265450 
5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe9dcc0a-0321-4f68-929f-fb5393b97e38","Type":"ContainerDied","Data":"f342b678c59a1fd0c910bc75b1ece6584400de2b1073bf8c1d560e1d456b0cab"} Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.269750 5005 generic.go:334] "Generic (PLEG): container finished" podID="7d41f2ea-e694-463b-a0bb-d8b987bab0b4" containerID="6b54535ac7d6d40040a5464c0f89dfc34ae6245c86d24ce9e2ff3c3380d24558" exitCode=0 Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.270217 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7d41f2ea-e694-463b-a0bb-d8b987bab0b4","Type":"ContainerDied","Data":"6b54535ac7d6d40040a5464c0f89dfc34ae6245c86d24ce9e2ff3c3380d24558"} Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.271416 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.346507 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.372989 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.635726 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m4bbm"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.636013 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerName="dnsmasq-dns" containerID="cri-o://e1f9b67ed4151e4756d4a11a7c586d419faf19a95df07d528fe6a0d771326d35" gracePeriod=10 Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.637633 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.698934 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bbt7l"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.700447 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.709078 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.715420 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bbt7l"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.723636 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q7v44"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.724553 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.732696 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.773306 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q7v44"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806152 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fc1ff781-cfcd-4b44-92d5-ff9153b19871-ovs-rundir\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806568 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806615 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1ff781-cfcd-4b44-92d5-ff9153b19871-config\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806636 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806675 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnqz\" (UniqueName: \"kubernetes.io/projected/d356e016-7c51-4906-be6a-783d3c4049d4-kube-api-access-kvnqz\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806692 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1ff781-cfcd-4b44-92d5-ff9153b19871-combined-ca-bundle\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806726 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-config\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806751 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1ff781-cfcd-4b44-92d5-ff9153b19871-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806776 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fc1ff781-cfcd-4b44-92d5-ff9153b19871-ovn-rundir\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.806793 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xzc\" (UniqueName: \"kubernetes.io/projected/fc1ff781-cfcd-4b44-92d5-ff9153b19871-kube-api-access-m8xzc\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.855768 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9524w"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.856493 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-9524w" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerName="dnsmasq-dns" containerID="cri-o://ac356fd44d429a2ffeef1defa6530a8958b27e14a6f0b61c596922c248f08ba9" gracePeriod=10 
Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.859083 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.893989 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2mjrz"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.901593 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.905764 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908084 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fc1ff781-cfcd-4b44-92d5-ff9153b19871-ovn-rundir\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908127 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xzc\" (UniqueName: \"kubernetes.io/projected/fc1ff781-cfcd-4b44-92d5-ff9153b19871-kube-api-access-m8xzc\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908157 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fc1ff781-cfcd-4b44-92d5-ff9153b19871-ovs-rundir\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908208 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908232 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1ff781-cfcd-4b44-92d5-ff9153b19871-config\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908249 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908290 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnqz\" (UniqueName: \"kubernetes.io/projected/d356e016-7c51-4906-be6a-783d3c4049d4-kube-api-access-kvnqz\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908309 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1ff781-cfcd-4b44-92d5-ff9153b19871-combined-ca-bundle\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908402 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-config\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.908426 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1ff781-cfcd-4b44-92d5-ff9153b19871-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.909245 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.909538 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fc1ff781-cfcd-4b44-92d5-ff9153b19871-ovn-rundir\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.909762 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fc1ff781-cfcd-4b44-92d5-ff9153b19871-ovs-rundir\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.909881 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1ff781-cfcd-4b44-92d5-ff9153b19871-config\") pod 
\"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.909960 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.910492 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-config\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.911406 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.913262 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.920924 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-48bgw" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.921247 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.921407 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.923998 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc1ff781-cfcd-4b44-92d5-ff9153b19871-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.926245 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1ff781-cfcd-4b44-92d5-ff9153b19871-combined-ca-bundle\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.936648 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.939950 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xzc\" (UniqueName: \"kubernetes.io/projected/fc1ff781-cfcd-4b44-92d5-ff9153b19871-kube-api-access-m8xzc\") pod \"ovn-controller-metrics-q7v44\" (UID: \"fc1ff781-cfcd-4b44-92d5-ff9153b19871\") " pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.949383 5005 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2mjrz"] Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.950733 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnqz\" (UniqueName: \"kubernetes.io/projected/d356e016-7c51-4906-be6a-783d3c4049d4-kube-api-access-kvnqz\") pod \"dnsmasq-dns-7fd796d7df-bbt7l\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:44 crc kubenswrapper[5005]: I0225 11:34:44.978640 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009448 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-scripts\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009515 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-config\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009540 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkf8\" (UniqueName: \"kubernetes.io/projected/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-kube-api-access-qmkf8\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009560 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009593 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009673 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009697 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009796 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009842 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2lx\" (UniqueName: 
\"kubernetes.io/projected/6a46587d-8818-4e55-8351-4bb327e0010b-kube-api-access-6d2lx\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009905 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-config\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.009954 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.010011 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.019753 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.045723 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q7v44" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111656 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-config\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111694 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmkf8\" (UniqueName: \"kubernetes.io/projected/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-kube-api-access-qmkf8\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111717 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111754 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111809 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111837 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111864 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111881 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d2lx\" (UniqueName: \"kubernetes.io/projected/6a46587d-8818-4e55-8351-4bb327e0010b-kube-api-access-6d2lx\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111903 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-config\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111923 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111950 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.111990 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-scripts\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.112642 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.113207 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-config\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.113712 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.113772 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-scripts\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 
11:34:45.114491 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-config\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.116881 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.118030 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.118389 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.122893 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.148152 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.148178 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmkf8\" (UniqueName: \"kubernetes.io/projected/e2fa3add-6e64-4cfb-9349-7650d9fa6da5-kube-api-access-qmkf8\") pod \"ovn-northd-0\" (UID: \"e2fa3add-6e64-4cfb-9349-7650d9fa6da5\") " pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.150424 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d2lx\" (UniqueName: \"kubernetes.io/projected/6a46587d-8818-4e55-8351-4bb327e0010b-kube-api-access-6d2lx\") pod \"dnsmasq-dns-86db49b7ff-2mjrz\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.289906 5005 generic.go:334] "Generic (PLEG): container finished" podID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerID="ac356fd44d429a2ffeef1defa6530a8958b27e14a6f0b61c596922c248f08ba9" exitCode=0 Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.290076 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9524w" event={"ID":"d71bf24a-67d7-40ba-8368-5dfb5d2b6036","Type":"ContainerDied","Data":"ac356fd44d429a2ffeef1defa6530a8958b27e14a6f0b61c596922c248f08ba9"} Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.294585 5005 generic.go:334] "Generic (PLEG): container finished" podID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerID="e1f9b67ed4151e4756d4a11a7c586d419faf19a95df07d528fe6a0d771326d35" exitCode=0 Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.295670 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" 
event={"ID":"3705a4c5-72f2-4423-9987-7b182bba8ae6","Type":"ContainerDied","Data":"e1f9b67ed4151e4756d4a11a7c586d419faf19a95df07d528fe6a0d771326d35"} Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.315214 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.335244 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.455795 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bbt7l"] Feb 25 11:34:45 crc kubenswrapper[5005]: W0225 11:34:45.477542 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd356e016_7c51_4906_be6a_783d3c4049d4.slice/crio-ffe6e8ceb583edbc7b68b4708da70b7e6586af0cd52dcf6011a753fd68ab8e6b WatchSource:0}: Error finding container ffe6e8ceb583edbc7b68b4708da70b7e6586af0cd52dcf6011a753fd68ab8e6b: Status 404 returned error can't find the container with id ffe6e8ceb583edbc7b68b4708da70b7e6586af0cd52dcf6011a753fd68ab8e6b Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.553242 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q7v44"] Feb 25 11:34:45 crc kubenswrapper[5005]: W0225 11:34:45.555273 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc1ff781_cfcd_4b44_92d5_ff9153b19871.slice/crio-fba78f4ddf9e84265886f06721c0a80d057176634b300a0c0ecdc1a497fed296 WatchSource:0}: Error finding container fba78f4ddf9e84265886f06721c0a80d057176634b300a0c0ecdc1a497fed296: Status 404 returned error can't find the container with id fba78f4ddf9e84265886f06721c0a80d057176634b300a0c0ecdc1a497fed296 Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.788852 5005 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2mjrz"] Feb 25 11:34:45 crc kubenswrapper[5005]: W0225 11:34:45.791861 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a46587d_8818_4e55_8351_4bb327e0010b.slice/crio-6ac681e56458fad36cd1c3fc728bf37d33faa8381c072418b36d2489d67e8200 WatchSource:0}: Error finding container 6ac681e56458fad36cd1c3fc728bf37d33faa8381c072418b36d2489d67e8200: Status 404 returned error can't find the container with id 6ac681e56458fad36cd1c3fc728bf37d33faa8381c072418b36d2489d67e8200 Feb 25 11:34:45 crc kubenswrapper[5005]: I0225 11:34:45.849448 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 25 11:34:46 crc kubenswrapper[5005]: I0225 11:34:46.304681 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q7v44" event={"ID":"fc1ff781-cfcd-4b44-92d5-ff9153b19871","Type":"ContainerStarted","Data":"fba78f4ddf9e84265886f06721c0a80d057176634b300a0c0ecdc1a497fed296"} Feb 25 11:34:46 crc kubenswrapper[5005]: I0225 11:34:46.306646 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e2fa3add-6e64-4cfb-9349-7650d9fa6da5","Type":"ContainerStarted","Data":"4cd0dc4df53aa8218a5cd1faf95261f1f6c2f810f335a802c9a635754dc2a7cb"} Feb 25 11:34:46 crc kubenswrapper[5005]: I0225 11:34:46.308216 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" event={"ID":"d356e016-7c51-4906-be6a-783d3c4049d4","Type":"ContainerStarted","Data":"ffe6e8ceb583edbc7b68b4708da70b7e6586af0cd52dcf6011a753fd68ab8e6b"} Feb 25 11:34:46 crc kubenswrapper[5005]: I0225 11:34:46.309622 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" 
event={"ID":"6a46587d-8818-4e55-8351-4bb327e0010b","Type":"ContainerStarted","Data":"6ac681e56458fad36cd1c3fc728bf37d33faa8381c072418b36d2489d67e8200"} Feb 25 11:34:46 crc kubenswrapper[5005]: I0225 11:34:46.955564 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.050526 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sbtq\" (UniqueName: \"kubernetes.io/projected/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-kube-api-access-8sbtq\") pod \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.050932 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-config\") pod \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.051410 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-dns-svc\") pod \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\" (UID: \"d71bf24a-67d7-40ba-8368-5dfb5d2b6036\") " Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.056635 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.057415 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-kube-api-access-8sbtq" (OuterVolumeSpecName: "kube-api-access-8sbtq") pod "d71bf24a-67d7-40ba-8368-5dfb5d2b6036" (UID: "d71bf24a-67d7-40ba-8368-5dfb5d2b6036"). InnerVolumeSpecName "kube-api-access-8sbtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.097456 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-config" (OuterVolumeSpecName: "config") pod "d71bf24a-67d7-40ba-8368-5dfb5d2b6036" (UID: "d71bf24a-67d7-40ba-8368-5dfb5d2b6036"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.107781 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d71bf24a-67d7-40ba-8368-5dfb5d2b6036" (UID: "d71bf24a-67d7-40ba-8368-5dfb5d2b6036"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.153060 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh9pr\" (UniqueName: \"kubernetes.io/projected/3705a4c5-72f2-4423-9987-7b182bba8ae6-kube-api-access-zh9pr\") pod \"3705a4c5-72f2-4423-9987-7b182bba8ae6\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.153130 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-dns-svc\") pod \"3705a4c5-72f2-4423-9987-7b182bba8ae6\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.153230 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-config\") pod \"3705a4c5-72f2-4423-9987-7b182bba8ae6\" (UID: \"3705a4c5-72f2-4423-9987-7b182bba8ae6\") " Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 
11:34:47.153894 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.153913 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.153922 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sbtq\" (UniqueName: \"kubernetes.io/projected/d71bf24a-67d7-40ba-8368-5dfb5d2b6036-kube-api-access-8sbtq\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.156571 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3705a4c5-72f2-4423-9987-7b182bba8ae6-kube-api-access-zh9pr" (OuterVolumeSpecName: "kube-api-access-zh9pr") pod "3705a4c5-72f2-4423-9987-7b182bba8ae6" (UID: "3705a4c5-72f2-4423-9987-7b182bba8ae6"). InnerVolumeSpecName "kube-api-access-zh9pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.192676 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-config" (OuterVolumeSpecName: "config") pod "3705a4c5-72f2-4423-9987-7b182bba8ae6" (UID: "3705a4c5-72f2-4423-9987-7b182bba8ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.198512 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3705a4c5-72f2-4423-9987-7b182bba8ae6" (UID: "3705a4c5-72f2-4423-9987-7b182bba8ae6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.255922 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.255972 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh9pr\" (UniqueName: \"kubernetes.io/projected/3705a4c5-72f2-4423-9987-7b182bba8ae6-kube-api-access-zh9pr\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.255982 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3705a4c5-72f2-4423-9987-7b182bba8ae6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.318658 5005 generic.go:334] "Generic (PLEG): container finished" podID="d356e016-7c51-4906-be6a-783d3c4049d4" containerID="df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6" exitCode=0 Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.318827 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" event={"ID":"d356e016-7c51-4906-be6a-783d3c4049d4","Type":"ContainerDied","Data":"df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.331215 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9524w" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.331195 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9524w" event={"ID":"d71bf24a-67d7-40ba-8368-5dfb5d2b6036","Type":"ContainerDied","Data":"50155660ec794158d706bc8193ed8e4be985e8bb7600dca5bdfd229f3bc72cf6"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.331491 5005 scope.go:117] "RemoveContainer" containerID="ac356fd44d429a2ffeef1defa6530a8958b27e14a6f0b61c596922c248f08ba9" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.345246 5005 generic.go:334] "Generic (PLEG): container finished" podID="6a46587d-8818-4e55-8351-4bb327e0010b" containerID="366e8a79a0df83ee226891ebb4b2d003cd8a6eab3562a4550795fe6253ba4d31" exitCode=0 Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.345655 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" event={"ID":"6a46587d-8818-4e55-8351-4bb327e0010b","Type":"ContainerDied","Data":"366e8a79a0df83ee226891ebb4b2d003cd8a6eab3562a4550795fe6253ba4d31"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.350043 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.350246 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m4bbm" event={"ID":"3705a4c5-72f2-4423-9987-7b182bba8ae6","Type":"ContainerDied","Data":"ff9c40a8421b0bdc698cd11882eb4f535c9db0a29bd4b532d3b166128c5cba65"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.352260 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q7v44" event={"ID":"fc1ff781-cfcd-4b44-92d5-ff9153b19871","Type":"ContainerStarted","Data":"abbed7a101162ec8c5d5dcce21fd0ac3dd3eaf181fe09edde6cc416d98a13f6f"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.364834 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe9dcc0a-0321-4f68-929f-fb5393b97e38","Type":"ContainerStarted","Data":"cbd740bb857e68f5fba8ea889a3ce7f12d8e10f1751a6661da242fe890e56084"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.399597 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7d41f2ea-e694-463b-a0bb-d8b987bab0b4","Type":"ContainerStarted","Data":"c9ed67bc5ffe518ff7cdc1a39cfa1e25afebec762b23ea408346bb2840f5a27a"} Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.421123 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q7v44" podStartSLOduration=3.421099442 podStartE2EDuration="3.421099442s" podCreationTimestamp="2026-02-25 11:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:34:47.404414825 +0000 UTC m=+1001.445147152" watchObservedRunningTime="2026-02-25 11:34:47.421099442 +0000 UTC m=+1001.461831769" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.436403 5005 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.068940128 podStartE2EDuration="28.436345765s" podCreationTimestamp="2026-02-25 11:34:19 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.15862145 +0000 UTC m=+985.199353767" lastFinishedPulling="2026-02-25 11:34:38.526027077 +0000 UTC m=+992.566759404" observedRunningTime="2026-02-25 11:34:47.433936082 +0000 UTC m=+1001.474668409" watchObservedRunningTime="2026-02-25 11:34:47.436345765 +0000 UTC m=+1001.477078092" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.444004 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.464926 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.295686767 podStartE2EDuration="27.464909245s" podCreationTimestamp="2026-02-25 11:34:20 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.449633643 +0000 UTC m=+985.490365970" lastFinishedPulling="2026-02-25 11:34:38.618856121 +0000 UTC m=+992.659588448" observedRunningTime="2026-02-25 11:34:47.457597843 +0000 UTC m=+1001.498330170" watchObservedRunningTime="2026-02-25 11:34:47.464909245 +0000 UTC m=+1001.505641572" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.536892 5005 scope.go:117] "RemoveContainer" containerID="295a09df38822e89a91d630580add5c3c34316d92dc1ad422865a197ba919d84" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.543280 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9524w"] Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.549612 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9524w"] Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.555724 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m4bbm"] Feb 25 11:34:47 crc kubenswrapper[5005]: 
I0225 11:34:47.574075 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m4bbm"] Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.586571 5005 scope.go:117] "RemoveContainer" containerID="e1f9b67ed4151e4756d4a11a7c586d419faf19a95df07d528fe6a0d771326d35" Feb 25 11:34:47 crc kubenswrapper[5005]: I0225 11:34:47.604684 5005 scope.go:117] "RemoveContainer" containerID="c646e8d80fdbf6fe3b4fe057e0482b91a4cb6df312292d320db58f9b668c059d" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.408295 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" event={"ID":"6a46587d-8818-4e55-8351-4bb327e0010b","Type":"ContainerStarted","Data":"bfc9412e280796d64f8b1d1103466667529a68066dacbe185f3d37b22315e0e7"} Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.408411 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.410716 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e2fa3add-6e64-4cfb-9349-7650d9fa6da5","Type":"ContainerStarted","Data":"0648f26f7c670cca1a5ef7e6974497d0b2581802ed848f0f7e3bc01920d9f801"} Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.410758 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e2fa3add-6e64-4cfb-9349-7650d9fa6da5","Type":"ContainerStarted","Data":"2890fc77a2b1795c19502544e902cce029ee82f2ce298a0810466d776e08f079"} Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.411148 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.412531 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" 
event={"ID":"d356e016-7c51-4906-be6a-783d3c4049d4","Type":"ContainerStarted","Data":"7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02"} Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.412657 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.436908 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" podStartSLOduration=4.436890561 podStartE2EDuration="4.436890561s" podCreationTimestamp="2026-02-25 11:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:34:48.432465286 +0000 UTC m=+1002.473197613" watchObservedRunningTime="2026-02-25 11:34:48.436890561 +0000 UTC m=+1002.477622888" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.456635 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.884451818 podStartE2EDuration="4.456619711s" podCreationTimestamp="2026-02-25 11:34:44 +0000 UTC" firstStartedPulling="2026-02-25 11:34:45.856826769 +0000 UTC m=+999.897559106" lastFinishedPulling="2026-02-25 11:34:47.428994672 +0000 UTC m=+1001.469726999" observedRunningTime="2026-02-25 11:34:48.454536667 +0000 UTC m=+1002.495268994" watchObservedRunningTime="2026-02-25 11:34:48.456619711 +0000 UTC m=+1002.497352038" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.474786 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" podStartSLOduration=4.474770633 podStartE2EDuration="4.474770633s" podCreationTimestamp="2026-02-25 11:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:34:48.470479803 +0000 UTC m=+1002.511212130" 
watchObservedRunningTime="2026-02-25 11:34:48.474770633 +0000 UTC m=+1002.515502960" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.696297 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" path="/var/lib/kubelet/pods/3705a4c5-72f2-4423-9987-7b182bba8ae6/volumes" Feb 25 11:34:48 crc kubenswrapper[5005]: I0225 11:34:48.697092 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" path="/var/lib/kubelet/pods/d71bf24a-67d7-40ba-8368-5dfb5d2b6036/volumes" Feb 25 11:34:50 crc kubenswrapper[5005]: I0225 11:34:50.731812 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 25 11:34:50 crc kubenswrapper[5005]: I0225 11:34:50.732311 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 25 11:34:51 crc kubenswrapper[5005]: I0225 11:34:51.142539 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 25 11:34:51 crc kubenswrapper[5005]: I0225 11:34:51.515326 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 25 11:34:52 crc kubenswrapper[5005]: I0225 11:34:52.215015 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:52 crc kubenswrapper[5005]: I0225 11:34:52.215062 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:52 crc kubenswrapper[5005]: I0225 11:34:52.287500 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:52 crc kubenswrapper[5005]: I0225 11:34:52.516969 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 25 11:34:53 crc 
kubenswrapper[5005]: I0225 11:34:53.581227 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pgwr8"] Feb 25 11:34:53 crc kubenswrapper[5005]: E0225 11:34:53.581769 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerName="init" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.581793 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerName="init" Feb 25 11:34:53 crc kubenswrapper[5005]: E0225 11:34:53.581812 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerName="dnsmasq-dns" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.581826 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerName="dnsmasq-dns" Feb 25 11:34:53 crc kubenswrapper[5005]: E0225 11:34:53.581882 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerName="dnsmasq-dns" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.581896 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerName="dnsmasq-dns" Feb 25 11:34:53 crc kubenswrapper[5005]: E0225 11:34:53.581933 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerName="init" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.581945 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerName="init" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.582257 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71bf24a-67d7-40ba-8368-5dfb5d2b6036" containerName="dnsmasq-dns" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.582287 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3705a4c5-72f2-4423-9987-7b182bba8ae6" containerName="dnsmasq-dns" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.583104 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.595985 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pgwr8"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.698872 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lbb\" (UniqueName: \"kubernetes.io/projected/55244bcb-677d-4120-9e64-a52a075d96b8-kube-api-access-x8lbb\") pod \"keystone-db-create-pgwr8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") " pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.698993 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55244bcb-677d-4120-9e64-a52a075d96b8-operator-scripts\") pod \"keystone-db-create-pgwr8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") " pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.720690 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6acf-account-create-update-jxgrk"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.725308 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.728023 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.729533 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6acf-account-create-update-jxgrk"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.789721 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kdhsw"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.791468 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.800191 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-operator-scripts\") pod \"keystone-6acf-account-create-update-jxgrk\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") " pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.800412 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lbb\" (UniqueName: \"kubernetes.io/projected/55244bcb-677d-4120-9e64-a52a075d96b8-kube-api-access-x8lbb\") pod \"keystone-db-create-pgwr8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") " pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.800504 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64bs5\" (UniqueName: \"kubernetes.io/projected/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-kube-api-access-64bs5\") pod \"keystone-6acf-account-create-update-jxgrk\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") " 
pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.800592 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55244bcb-677d-4120-9e64-a52a075d96b8-operator-scripts\") pod \"keystone-db-create-pgwr8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") " pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.801420 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55244bcb-677d-4120-9e64-a52a075d96b8-operator-scripts\") pod \"keystone-db-create-pgwr8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") " pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.819717 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kdhsw"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.838847 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lbb\" (UniqueName: \"kubernetes.io/projected/55244bcb-677d-4120-9e64-a52a075d96b8-kube-api-access-x8lbb\") pod \"keystone-db-create-pgwr8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") " pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.899546 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1873-account-create-update-6jhct"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.900767 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1873-account-create-update-6jhct" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.901462 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-operator-scripts\") pod \"keystone-6acf-account-create-update-jxgrk\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") " pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.901509 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64bs5\" (UniqueName: \"kubernetes.io/projected/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-kube-api-access-64bs5\") pod \"keystone-6acf-account-create-update-jxgrk\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") " pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.901576 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd2vl\" (UniqueName: \"kubernetes.io/projected/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-kube-api-access-cd2vl\") pod \"placement-db-create-kdhsw\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") " pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.901616 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-operator-scripts\") pod \"placement-db-create-kdhsw\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") " pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.902201 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-operator-scripts\") pod \"keystone-6acf-account-create-update-jxgrk\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") " pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.902790 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pgwr8" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.903113 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.909193 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1873-account-create-update-6jhct"] Feb 25 11:34:53 crc kubenswrapper[5005]: I0225 11:34:53.932681 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64bs5\" (UniqueName: \"kubernetes.io/projected/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-kube-api-access-64bs5\") pod \"keystone-6acf-account-create-update-jxgrk\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") " pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.002508 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09419be8-2dd4-4b0d-a830-e59c81a5a02c-operator-scripts\") pod \"placement-1873-account-create-update-6jhct\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") " pod="openstack/placement-1873-account-create-update-6jhct" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.002755 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd2vl\" (UniqueName: \"kubernetes.io/projected/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-kube-api-access-cd2vl\") pod \"placement-db-create-kdhsw\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") " 
pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.002793 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-operator-scripts\") pod \"placement-db-create-kdhsw\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") " pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.002857 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdpd\" (UniqueName: \"kubernetes.io/projected/09419be8-2dd4-4b0d-a830-e59c81a5a02c-kube-api-access-swdpd\") pod \"placement-1873-account-create-update-6jhct\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") " pod="openstack/placement-1873-account-create-update-6jhct" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.003826 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-operator-scripts\") pod \"placement-db-create-kdhsw\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") " pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.037090 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd2vl\" (UniqueName: \"kubernetes.io/projected/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-kube-api-access-cd2vl\") pod \"placement-db-create-kdhsw\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") " pod="openstack/placement-db-create-kdhsw" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.040330 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6acf-account-create-update-jxgrk" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.104265 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swdpd\" (UniqueName: \"kubernetes.io/projected/09419be8-2dd4-4b0d-a830-e59c81a5a02c-kube-api-access-swdpd\") pod \"placement-1873-account-create-update-6jhct\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") " pod="openstack/placement-1873-account-create-update-6jhct" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.104697 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09419be8-2dd4-4b0d-a830-e59c81a5a02c-operator-scripts\") pod \"placement-1873-account-create-update-6jhct\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") " pod="openstack/placement-1873-account-create-update-6jhct" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.106164 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09419be8-2dd4-4b0d-a830-e59c81a5a02c-operator-scripts\") pod \"placement-1873-account-create-update-6jhct\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") " pod="openstack/placement-1873-account-create-update-6jhct" Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.111678 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kdhsw"
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.121309 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdpd\" (UniqueName: \"kubernetes.io/projected/09419be8-2dd4-4b0d-a830-e59c81a5a02c-kube-api-access-swdpd\") pod \"placement-1873-account-create-update-6jhct\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") " pod="openstack/placement-1873-account-create-update-6jhct"
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.315904 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1873-account-create-update-6jhct"
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.349680 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pgwr8"]
Feb 25 11:34:54 crc kubenswrapper[5005]: W0225 11:34:54.360266 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55244bcb_677d_4120_9e64_a52a075d96b8.slice/crio-1ebaaa915edcdb17c4f9beb7987e89e419831055ddecca30b9bb30290e1a7a1e WatchSource:0}: Error finding container 1ebaaa915edcdb17c4f9beb7987e89e419831055ddecca30b9bb30290e1a7a1e: Status 404 returned error can't find the container with id 1ebaaa915edcdb17c4f9beb7987e89e419831055ddecca30b9bb30290e1a7a1e
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.467285 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pgwr8" event={"ID":"55244bcb-677d-4120-9e64-a52a075d96b8","Type":"ContainerStarted","Data":"1ebaaa915edcdb17c4f9beb7987e89e419831055ddecca30b9bb30290e1a7a1e"}
Feb 25 11:34:54 crc kubenswrapper[5005]: W0225 11:34:54.503891 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode276afbc_14ea_40fc_85b8_1c94ee29e8f6.slice/crio-6fda10630e29f388e147c61823b8dfa7b98324486626ea054ec7a99a7fe03abf WatchSource:0}: Error finding container 6fda10630e29f388e147c61823b8dfa7b98324486626ea054ec7a99a7fe03abf: Status 404 returned error can't find the container with id 6fda10630e29f388e147c61823b8dfa7b98324486626ea054ec7a99a7fe03abf
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.504435 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6acf-account-create-update-jxgrk"]
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.596299 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kdhsw"]
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.714086 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 25 11:34:54 crc kubenswrapper[5005]: I0225 11:34:54.737031 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1873-account-create-update-6jhct"]
Feb 25 11:34:54 crc kubenswrapper[5005]: W0225 11:34:54.769300 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09419be8_2dd4_4b0d_a830_e59c81a5a02c.slice/crio-d1116252f57e05b5c2a514c1a506ddf9d718e5c7e62ad86a5d5df52b9ae9a90e WatchSource:0}: Error finding container d1116252f57e05b5c2a514c1a506ddf9d718e5c7e62ad86a5d5df52b9ae9a90e: Status 404 returned error can't find the container with id d1116252f57e05b5c2a514c1a506ddf9d718e5c7e62ad86a5d5df52b9ae9a90e
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.021545 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l"
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.316498 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz"
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.405441 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bbt7l"]
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.474253 5005 generic.go:334] "Generic (PLEG): container finished" podID="2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" containerID="037a174a553ca0e3c71f0a9242b078d596495feb895a62bac6462bacbd9b6133" exitCode=0
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.474327 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kdhsw" event={"ID":"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985","Type":"ContainerDied","Data":"037a174a553ca0e3c71f0a9242b078d596495feb895a62bac6462bacbd9b6133"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.474351 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kdhsw" event={"ID":"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985","Type":"ContainerStarted","Data":"a1dec74b3d1de41f805b40f1f5c5b731a4cad5a4b80bcbe0bd9cd7b21d9759a6"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.479119 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6acf-account-create-update-jxgrk" event={"ID":"e276afbc-14ea-40fc-85b8-1c94ee29e8f6","Type":"ContainerDied","Data":"ee2a0311e7e2de7640df3181dad75a64754ec9167f7d4c894b23a0bd9c748d29"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.479074 5005 generic.go:334] "Generic (PLEG): container finished" podID="e276afbc-14ea-40fc-85b8-1c94ee29e8f6" containerID="ee2a0311e7e2de7640df3181dad75a64754ec9167f7d4c894b23a0bd9c748d29" exitCode=0
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.479166 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6acf-account-create-update-jxgrk" event={"ID":"e276afbc-14ea-40fc-85b8-1c94ee29e8f6","Type":"ContainerStarted","Data":"6fda10630e29f388e147c61823b8dfa7b98324486626ea054ec7a99a7fe03abf"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.497785 5005 generic.go:334] "Generic (PLEG): container finished" podID="09419be8-2dd4-4b0d-a830-e59c81a5a02c" containerID="d29d6f434381be09314967ee5f8cd4bf28986e657be8b78ff14f79551573feba" exitCode=0
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.497894 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1873-account-create-update-6jhct" event={"ID":"09419be8-2dd4-4b0d-a830-e59c81a5a02c","Type":"ContainerDied","Data":"d29d6f434381be09314967ee5f8cd4bf28986e657be8b78ff14f79551573feba"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.497951 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1873-account-create-update-6jhct" event={"ID":"09419be8-2dd4-4b0d-a830-e59c81a5a02c","Type":"ContainerStarted","Data":"d1116252f57e05b5c2a514c1a506ddf9d718e5c7e62ad86a5d5df52b9ae9a90e"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.499971 5005 generic.go:334] "Generic (PLEG): container finished" podID="55244bcb-677d-4120-9e64-a52a075d96b8" containerID="75cbe7f0936212e1bffe2d28ea3d9f007ecc44c963752360cb7162d4aba80823" exitCode=0
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.500772 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" containerName="dnsmasq-dns" containerID="cri-o://7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02" gracePeriod=10
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.501443 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pgwr8" event={"ID":"55244bcb-677d-4120-9e64-a52a075d96b8","Type":"ContainerDied","Data":"75cbe7f0936212e1bffe2d28ea3d9f007ecc44c963752360cb7162d4aba80823"}
Feb 25 11:34:55 crc kubenswrapper[5005]: I0225 11:34:55.923012 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.047475 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnqz\" (UniqueName: \"kubernetes.io/projected/d356e016-7c51-4906-be6a-783d3c4049d4-kube-api-access-kvnqz\") pod \"d356e016-7c51-4906-be6a-783d3c4049d4\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") "
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.047530 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-dns-svc\") pod \"d356e016-7c51-4906-be6a-783d3c4049d4\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") "
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.048451 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-config\") pod \"d356e016-7c51-4906-be6a-783d3c4049d4\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") "
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.048483 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-ovsdbserver-nb\") pod \"d356e016-7c51-4906-be6a-783d3c4049d4\" (UID: \"d356e016-7c51-4906-be6a-783d3c4049d4\") "
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.053197 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d356e016-7c51-4906-be6a-783d3c4049d4-kube-api-access-kvnqz" (OuterVolumeSpecName: "kube-api-access-kvnqz") pod "d356e016-7c51-4906-be6a-783d3c4049d4" (UID: "d356e016-7c51-4906-be6a-783d3c4049d4"). InnerVolumeSpecName "kube-api-access-kvnqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.081743 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-config" (OuterVolumeSpecName: "config") pod "d356e016-7c51-4906-be6a-783d3c4049d4" (UID: "d356e016-7c51-4906-be6a-783d3c4049d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.082470 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d356e016-7c51-4906-be6a-783d3c4049d4" (UID: "d356e016-7c51-4906-be6a-783d3c4049d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.096229 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d356e016-7c51-4906-be6a-783d3c4049d4" (UID: "d356e016-7c51-4906-be6a-783d3c4049d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.150809 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-config\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.150859 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.150879 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnqz\" (UniqueName: \"kubernetes.io/projected/d356e016-7c51-4906-be6a-783d3c4049d4-kube-api-access-kvnqz\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.150894 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d356e016-7c51-4906-be6a-783d3c4049d4-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.512035 5005 generic.go:334] "Generic (PLEG): container finished" podID="d356e016-7c51-4906-be6a-783d3c4049d4" containerID="7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02" exitCode=0
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.512098 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" event={"ID":"d356e016-7c51-4906-be6a-783d3c4049d4","Type":"ContainerDied","Data":"7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02"}
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.512126 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.512166 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bbt7l" event={"ID":"d356e016-7c51-4906-be6a-783d3c4049d4","Type":"ContainerDied","Data":"ffe6e8ceb583edbc7b68b4708da70b7e6586af0cd52dcf6011a753fd68ab8e6b"}
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.512191 5005 scope.go:117] "RemoveContainer" containerID="7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.542889 5005 scope.go:117] "RemoveContainer" containerID="df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.551277 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bbt7l"]
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.560222 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bbt7l"]
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.596048 5005 scope.go:117] "RemoveContainer" containerID="7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02"
Feb 25 11:34:56 crc kubenswrapper[5005]: E0225 11:34:56.596884 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02\": container with ID starting with 7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02 not found: ID does not exist" containerID="7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.596926 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02"} err="failed to get container status \"7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02\": rpc error: code = NotFound desc = could not find container \"7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02\": container with ID starting with 7503deb3962fa9074cc58421159e69fa020ad8c0e42059c0c9ca96b83b6b1f02 not found: ID does not exist"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.596951 5005 scope.go:117] "RemoveContainer" containerID="df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6"
Feb 25 11:34:56 crc kubenswrapper[5005]: E0225 11:34:56.597330 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6\": container with ID starting with df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6 not found: ID does not exist" containerID="df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.597356 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6"} err="failed to get container status \"df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6\": rpc error: code = NotFound desc = could not find container \"df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6\": container with ID starting with df6a900dbc43223370f78e1cd0fe8eb7571af96ce2ebd0c86cd79f02331b05e6 not found: ID does not exist"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.697689 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" path="/var/lib/kubelet/pods/d356e016-7c51-4906-be6a-783d3c4049d4/volumes"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.824111 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1873-account-create-update-6jhct"
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.965952 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swdpd\" (UniqueName: \"kubernetes.io/projected/09419be8-2dd4-4b0d-a830-e59c81a5a02c-kube-api-access-swdpd\") pod \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") "
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.966122 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09419be8-2dd4-4b0d-a830-e59c81a5a02c-operator-scripts\") pod \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\" (UID: \"09419be8-2dd4-4b0d-a830-e59c81a5a02c\") "
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.967088 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09419be8-2dd4-4b0d-a830-e59c81a5a02c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09419be8-2dd4-4b0d-a830-e59c81a5a02c" (UID: "09419be8-2dd4-4b0d-a830-e59c81a5a02c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.967308 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09419be8-2dd4-4b0d-a830-e59c81a5a02c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:56 crc kubenswrapper[5005]: I0225 11:34:56.969514 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09419be8-2dd4-4b0d-a830-e59c81a5a02c-kube-api-access-swdpd" (OuterVolumeSpecName: "kube-api-access-swdpd") pod "09419be8-2dd4-4b0d-a830-e59c81a5a02c" (UID: "09419be8-2dd4-4b0d-a830-e59c81a5a02c"). InnerVolumeSpecName "kube-api-access-swdpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.014191 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pgwr8"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.018652 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6acf-account-create-update-jxgrk"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.028599 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kdhsw"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.068664 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-operator-scripts\") pod \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") "
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.068799 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64bs5\" (UniqueName: \"kubernetes.io/projected/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-kube-api-access-64bs5\") pod \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\" (UID: \"e276afbc-14ea-40fc-85b8-1c94ee29e8f6\") "
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.068828 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8lbb\" (UniqueName: \"kubernetes.io/projected/55244bcb-677d-4120-9e64-a52a075d96b8-kube-api-access-x8lbb\") pod \"55244bcb-677d-4120-9e64-a52a075d96b8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") "
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.068851 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55244bcb-677d-4120-9e64-a52a075d96b8-operator-scripts\") pod \"55244bcb-677d-4120-9e64-a52a075d96b8\" (UID: \"55244bcb-677d-4120-9e64-a52a075d96b8\") "
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.069132 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swdpd\" (UniqueName: \"kubernetes.io/projected/09419be8-2dd4-4b0d-a830-e59c81a5a02c-kube-api-access-swdpd\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.069526 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e276afbc-14ea-40fc-85b8-1c94ee29e8f6" (UID: "e276afbc-14ea-40fc-85b8-1c94ee29e8f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.069887 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55244bcb-677d-4120-9e64-a52a075d96b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55244bcb-677d-4120-9e64-a52a075d96b8" (UID: "55244bcb-677d-4120-9e64-a52a075d96b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.072607 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55244bcb-677d-4120-9e64-a52a075d96b8-kube-api-access-x8lbb" (OuterVolumeSpecName: "kube-api-access-x8lbb") pod "55244bcb-677d-4120-9e64-a52a075d96b8" (UID: "55244bcb-677d-4120-9e64-a52a075d96b8"). InnerVolumeSpecName "kube-api-access-x8lbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.073404 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-kube-api-access-64bs5" (OuterVolumeSpecName: "kube-api-access-64bs5") pod "e276afbc-14ea-40fc-85b8-1c94ee29e8f6" (UID: "e276afbc-14ea-40fc-85b8-1c94ee29e8f6"). InnerVolumeSpecName "kube-api-access-64bs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170141 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd2vl\" (UniqueName: \"kubernetes.io/projected/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-kube-api-access-cd2vl\") pod \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") "
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170210 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-operator-scripts\") pod \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\" (UID: \"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985\") "
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170585 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55244bcb-677d-4120-9e64-a52a075d96b8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170603 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170613 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64bs5\" (UniqueName: \"kubernetes.io/projected/e276afbc-14ea-40fc-85b8-1c94ee29e8f6-kube-api-access-64bs5\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170626 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8lbb\" (UniqueName: \"kubernetes.io/projected/55244bcb-677d-4120-9e64-a52a075d96b8-kube-api-access-x8lbb\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.170977 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" (UID: "2f5eb2d0-058d-4b0e-80bc-5e2f919b4985"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.173111 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-kube-api-access-cd2vl" (OuterVolumeSpecName: "kube-api-access-cd2vl") pod "2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" (UID: "2f5eb2d0-058d-4b0e-80bc-5e2f919b4985"). InnerVolumeSpecName "kube-api-access-cd2vl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.271842 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd2vl\" (UniqueName: \"kubernetes.io/projected/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-kube-api-access-cd2vl\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.271882 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.524642 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6acf-account-create-update-jxgrk"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.524658 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6acf-account-create-update-jxgrk" event={"ID":"e276afbc-14ea-40fc-85b8-1c94ee29e8f6","Type":"ContainerDied","Data":"6fda10630e29f388e147c61823b8dfa7b98324486626ea054ec7a99a7fe03abf"}
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.524715 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fda10630e29f388e147c61823b8dfa7b98324486626ea054ec7a99a7fe03abf"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.527641 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1873-account-create-update-6jhct" event={"ID":"09419be8-2dd4-4b0d-a830-e59c81a5a02c","Type":"ContainerDied","Data":"d1116252f57e05b5c2a514c1a506ddf9d718e5c7e62ad86a5d5df52b9ae9a90e"}
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.527729 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1116252f57e05b5c2a514c1a506ddf9d718e5c7e62ad86a5d5df52b9ae9a90e"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.527924 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1873-account-create-update-6jhct"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.532694 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pgwr8"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.532693 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pgwr8" event={"ID":"55244bcb-677d-4120-9e64-a52a075d96b8","Type":"ContainerDied","Data":"1ebaaa915edcdb17c4f9beb7987e89e419831055ddecca30b9bb30290e1a7a1e"}
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.532831 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ebaaa915edcdb17c4f9beb7987e89e419831055ddecca30b9bb30290e1a7a1e"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.538701 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kdhsw" event={"ID":"2f5eb2d0-058d-4b0e-80bc-5e2f919b4985","Type":"ContainerDied","Data":"a1dec74b3d1de41f805b40f1f5c5b731a4cad5a4b80bcbe0bd9cd7b21d9759a6"}
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.538745 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1dec74b3d1de41f805b40f1f5c5b731a4cad5a4b80bcbe0bd9cd7b21d9759a6"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.538756 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kdhsw"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.607333 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7bd7z"]
Feb 25 11:34:57 crc kubenswrapper[5005]: E0225 11:34:57.607805 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" containerName="mariadb-database-create"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.607834 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" containerName="mariadb-database-create"
Feb 25 11:34:57 crc kubenswrapper[5005]: E0225 11:34:57.607874 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" containerName="init"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.607886 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" containerName="init"
Feb 25 11:34:57 crc kubenswrapper[5005]: E0225 11:34:57.607920 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" containerName="dnsmasq-dns"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.607932 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" containerName="dnsmasq-dns"
Feb 25 11:34:57 crc kubenswrapper[5005]: E0225 11:34:57.607967 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55244bcb-677d-4120-9e64-a52a075d96b8" containerName="mariadb-database-create"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.607980 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="55244bcb-677d-4120-9e64-a52a075d96b8" containerName="mariadb-database-create"
Feb 25 11:34:57 crc kubenswrapper[5005]: E0225 11:34:57.608009 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e276afbc-14ea-40fc-85b8-1c94ee29e8f6" containerName="mariadb-account-create-update"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608023 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e276afbc-14ea-40fc-85b8-1c94ee29e8f6" containerName="mariadb-account-create-update"
Feb 25 11:34:57 crc kubenswrapper[5005]: E0225 11:34:57.608043 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09419be8-2dd4-4b0d-a830-e59c81a5a02c" containerName="mariadb-account-create-update"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608055 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="09419be8-2dd4-4b0d-a830-e59c81a5a02c" containerName="mariadb-account-create-update"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608317 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d356e016-7c51-4906-be6a-783d3c4049d4" containerName="dnsmasq-dns"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608337 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="55244bcb-677d-4120-9e64-a52a075d96b8" containerName="mariadb-database-create"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608361 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="09419be8-2dd4-4b0d-a830-e59c81a5a02c" containerName="mariadb-account-create-update"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608404 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e276afbc-14ea-40fc-85b8-1c94ee29e8f6" containerName="mariadb-account-create-update"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.608421 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" containerName="mariadb-database-create"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.609468 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.671815 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7bd7z"]
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.687894 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-operator-scripts\") pod \"glance-db-create-7bd7z\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.688050 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4c6j\" (UniqueName: \"kubernetes.io/projected/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-kube-api-access-j4c6j\") pod \"glance-db-create-7bd7z\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.722166 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7a71-account-create-update-cmjms"]
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.723492 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.725799 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.731890 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7a71-account-create-update-cmjms"]
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.789180 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c677f6a5-2bfc-4007-aed6-c065c62b582d-operator-scripts\") pod \"glance-7a71-account-create-update-cmjms\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.789406 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-operator-scripts\") pod \"glance-db-create-7bd7z\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.789575 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4c6j\" (UniqueName: \"kubernetes.io/projected/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-kube-api-access-j4c6j\") pod \"glance-db-create-7bd7z\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.789704 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6j8m\" (UniqueName: \"kubernetes.io/projected/c677f6a5-2bfc-4007-aed6-c065c62b582d-kube-api-access-b6j8m\") pod \"glance-7a71-account-create-update-cmjms\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.791028 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-operator-scripts\") pod \"glance-db-create-7bd7z\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.811121 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4c6j\" (UniqueName: \"kubernetes.io/projected/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-kube-api-access-j4c6j\") pod \"glance-db-create-7bd7z\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.891048 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6j8m\" (UniqueName: \"kubernetes.io/projected/c677f6a5-2bfc-4007-aed6-c065c62b582d-kube-api-access-b6j8m\") pod \"glance-7a71-account-create-update-cmjms\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.891134 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c677f6a5-2bfc-4007-aed6-c065c62b582d-operator-scripts\") pod \"glance-7a71-account-create-update-cmjms\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.891823 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c677f6a5-2bfc-4007-aed6-c065c62b582d-operator-scripts\") pod \"glance-7a71-account-create-update-cmjms\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.911224 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6j8m\" (UniqueName: \"kubernetes.io/projected/c677f6a5-2bfc-4007-aed6-c065c62b582d-kube-api-access-b6j8m\") pod \"glance-7a71-account-create-update-cmjms\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:57 crc kubenswrapper[5005]: I0225 11:34:57.978318 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7bd7z"
Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.048303 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7a71-account-create-update-cmjms"
Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.087204 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.087273 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.087326 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q"
Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.088044 5005 kuberuntime_manager.go:1027] "Message for Container of pod"
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd803ca207119b9d3512cdb7e88427039abb6164280815395a904d77de0c108e"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.088112 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://bd803ca207119b9d3512cdb7e88427039abb6164280815395a904d77de0c108e" gracePeriod=600 Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.501000 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7bd7z"] Feb 25 11:34:58 crc kubenswrapper[5005]: W0225 11:34:58.508684 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a8b7c8_c7dc_42ad_8811_977d6f50f3d7.slice/crio-cf994c24b283b133039947686337223761c84a2b8c5944a7af2ffe67fcc04748 WatchSource:0}: Error finding container cf994c24b283b133039947686337223761c84a2b8c5944a7af2ffe67fcc04748: Status 404 returned error can't find the container with id cf994c24b283b133039947686337223761c84a2b8c5944a7af2ffe67fcc04748 Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.550564 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="bd803ca207119b9d3512cdb7e88427039abb6164280815395a904d77de0c108e" exitCode=0 Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.550785 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"bd803ca207119b9d3512cdb7e88427039abb6164280815395a904d77de0c108e"} Feb 25 
11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.550850 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"436a11adc02a3406c7c2d7029cfaa74683b64c268bdd958d676e56e989c38e2c"} Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.550892 5005 scope.go:117] "RemoveContainer" containerID="725882823e770cccc62c7fb201d52910ad904925f31e4c131e1bed3c3ec5a21f" Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.552779 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7bd7z" event={"ID":"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7","Type":"ContainerStarted","Data":"cf994c24b283b133039947686337223761c84a2b8c5944a7af2ffe67fcc04748"} Feb 25 11:34:58 crc kubenswrapper[5005]: I0225 11:34:58.583153 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7a71-account-create-update-cmjms"] Feb 25 11:34:58 crc kubenswrapper[5005]: W0225 11:34:58.584228 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc677f6a5_2bfc_4007_aed6_c065c62b582d.slice/crio-d4c82280b588b6153d9ed4ae548d77fdc80b5625049e090772249ebce68bd817 WatchSource:0}: Error finding container d4c82280b588b6153d9ed4ae548d77fdc80b5625049e090772249ebce68bd817: Status 404 returned error can't find the container with id d4c82280b588b6153d9ed4ae548d77fdc80b5625049e090772249ebce68bd817 Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.317650 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d47wv"] Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.319471 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.322067 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.348236 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d47wv"] Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.418810 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c56d4e-9ca3-428b-8588-be5b14c30b64-operator-scripts\") pod \"root-account-create-update-d47wv\" (UID: \"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.419255 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jg4z\" (UniqueName: \"kubernetes.io/projected/26c56d4e-9ca3-428b-8588-be5b14c30b64-kube-api-access-9jg4z\") pod \"root-account-create-update-d47wv\" (UID: \"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.520765 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jg4z\" (UniqueName: \"kubernetes.io/projected/26c56d4e-9ca3-428b-8588-be5b14c30b64-kube-api-access-9jg4z\") pod \"root-account-create-update-d47wv\" (UID: \"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.520935 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c56d4e-9ca3-428b-8588-be5b14c30b64-operator-scripts\") pod \"root-account-create-update-d47wv\" (UID: 
\"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.521859 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c56d4e-9ca3-428b-8588-be5b14c30b64-operator-scripts\") pod \"root-account-create-update-d47wv\" (UID: \"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.544610 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jg4z\" (UniqueName: \"kubernetes.io/projected/26c56d4e-9ca3-428b-8588-be5b14c30b64-kube-api-access-9jg4z\") pod \"root-account-create-update-d47wv\" (UID: \"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " pod="openstack/root-account-create-update-d47wv" Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.564596 5005 generic.go:334] "Generic (PLEG): container finished" podID="b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" containerID="b2eb9a440be25a4de790806ba4806867f4c9b22b04f12f8d951248290608dbda" exitCode=0 Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.564842 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7bd7z" event={"ID":"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7","Type":"ContainerDied","Data":"b2eb9a440be25a4de790806ba4806867f4c9b22b04f12f8d951248290608dbda"} Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.565708 5005 generic.go:334] "Generic (PLEG): container finished" podID="c677f6a5-2bfc-4007-aed6-c065c62b582d" containerID="93df242d3df6401dcc3fa87a4bf9762771934edad27050fde7371cd41e024e89" exitCode=0 Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.565816 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a71-account-create-update-cmjms" 
event={"ID":"c677f6a5-2bfc-4007-aed6-c065c62b582d","Type":"ContainerDied","Data":"93df242d3df6401dcc3fa87a4bf9762771934edad27050fde7371cd41e024e89"} Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.565895 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a71-account-create-update-cmjms" event={"ID":"c677f6a5-2bfc-4007-aed6-c065c62b582d","Type":"ContainerStarted","Data":"d4c82280b588b6153d9ed4ae548d77fdc80b5625049e090772249ebce68bd817"} Feb 25 11:34:59 crc kubenswrapper[5005]: I0225 11:34:59.639660 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d47wv" Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.305948 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d47wv"] Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.575653 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d47wv" event={"ID":"26c56d4e-9ca3-428b-8588-be5b14c30b64","Type":"ContainerStarted","Data":"44b43c062143be50350cdf7b19a3fb68bcc51eeaca2f7dbccb6f7a44c4ec64d9"} Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.575709 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d47wv" event={"ID":"26c56d4e-9ca3-428b-8588-be5b14c30b64","Type":"ContainerStarted","Data":"d4cece63efd99d8214adb70efe95977bf8500ab75cc53c7e00fa4eea7d93921c"} Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.599543 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-d47wv" podStartSLOduration=1.599523312 podStartE2EDuration="1.599523312s" podCreationTimestamp="2026-02-25 11:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:00.588816996 +0000 UTC m=+1014.629549333" watchObservedRunningTime="2026-02-25 
11:35:00.599523312 +0000 UTC m=+1014.640255649" Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.887868 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7a71-account-create-update-cmjms" Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.949339 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c677f6a5-2bfc-4007-aed6-c065c62b582d-operator-scripts\") pod \"c677f6a5-2bfc-4007-aed6-c065c62b582d\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.949538 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6j8m\" (UniqueName: \"kubernetes.io/projected/c677f6a5-2bfc-4007-aed6-c065c62b582d-kube-api-access-b6j8m\") pod \"c677f6a5-2bfc-4007-aed6-c065c62b582d\" (UID: \"c677f6a5-2bfc-4007-aed6-c065c62b582d\") " Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.949984 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c677f6a5-2bfc-4007-aed6-c065c62b582d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c677f6a5-2bfc-4007-aed6-c065c62b582d" (UID: "c677f6a5-2bfc-4007-aed6-c065c62b582d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.950473 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c677f6a5-2bfc-4007-aed6-c065c62b582d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.955811 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c677f6a5-2bfc-4007-aed6-c065c62b582d-kube-api-access-b6j8m" (OuterVolumeSpecName: "kube-api-access-b6j8m") pod "c677f6a5-2bfc-4007-aed6-c065c62b582d" (UID: "c677f6a5-2bfc-4007-aed6-c065c62b582d"). InnerVolumeSpecName "kube-api-access-b6j8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:00 crc kubenswrapper[5005]: I0225 11:35:00.970141 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7bd7z" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.051922 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4c6j\" (UniqueName: \"kubernetes.io/projected/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-kube-api-access-j4c6j\") pod \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.052266 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-operator-scripts\") pod \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\" (UID: \"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7\") " Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.052813 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" (UID: "b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.053129 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6j8m\" (UniqueName: \"kubernetes.io/projected/c677f6a5-2bfc-4007-aed6-c065c62b582d-kube-api-access-b6j8m\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.053225 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.054656 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-kube-api-access-j4c6j" (OuterVolumeSpecName: "kube-api-access-j4c6j") pod "b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" (UID: "b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7"). InnerVolumeSpecName "kube-api-access-j4c6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.154575 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4c6j\" (UniqueName: \"kubernetes.io/projected/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7-kube-api-access-j4c6j\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.585722 5005 generic.go:334] "Generic (PLEG): container finished" podID="26c56d4e-9ca3-428b-8588-be5b14c30b64" containerID="44b43c062143be50350cdf7b19a3fb68bcc51eeaca2f7dbccb6f7a44c4ec64d9" exitCode=0 Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.585867 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d47wv" event={"ID":"26c56d4e-9ca3-428b-8588-be5b14c30b64","Type":"ContainerDied","Data":"44b43c062143be50350cdf7b19a3fb68bcc51eeaca2f7dbccb6f7a44c4ec64d9"} Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.588253 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7bd7z" event={"ID":"b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7","Type":"ContainerDied","Data":"cf994c24b283b133039947686337223761c84a2b8c5944a7af2ffe67fcc04748"} Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.588313 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7bd7z" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.588320 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf994c24b283b133039947686337223761c84a2b8c5944a7af2ffe67fcc04748" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.590512 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a71-account-create-update-cmjms" event={"ID":"c677f6a5-2bfc-4007-aed6-c065c62b582d","Type":"ContainerDied","Data":"d4c82280b588b6153d9ed4ae548d77fdc80b5625049e090772249ebce68bd817"} Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.590553 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c82280b588b6153d9ed4ae548d77fdc80b5625049e090772249ebce68bd817" Feb 25 11:35:01 crc kubenswrapper[5005]: I0225 11:35:01.590568 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7a71-account-create-update-cmjms" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.878941 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8k7gx"] Feb 25 11:35:02 crc kubenswrapper[5005]: E0225 11:35:02.879872 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c677f6a5-2bfc-4007-aed6-c065c62b582d" containerName="mariadb-account-create-update" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.879895 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c677f6a5-2bfc-4007-aed6-c065c62b582d" containerName="mariadb-account-create-update" Feb 25 11:35:02 crc kubenswrapper[5005]: E0225 11:35:02.879925 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" containerName="mariadb-database-create" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.879937 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" 
containerName="mariadb-database-create" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.880211 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" containerName="mariadb-database-create" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.880250 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c677f6a5-2bfc-4007-aed6-c065c62b582d" containerName="mariadb-account-create-update" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.881071 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.883832 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-khhns" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.885500 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.892878 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8k7gx"] Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.981590 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d47wv" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.988666 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hxh\" (UniqueName: \"kubernetes.io/projected/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-kube-api-access-x4hxh\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.988731 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-db-sync-config-data\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.988931 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-combined-ca-bundle\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:02 crc kubenswrapper[5005]: I0225 11:35:02.989254 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-config-data\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.090323 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jg4z\" (UniqueName: \"kubernetes.io/projected/26c56d4e-9ca3-428b-8588-be5b14c30b64-kube-api-access-9jg4z\") pod \"26c56d4e-9ca3-428b-8588-be5b14c30b64\" (UID: 
\"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.090412 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c56d4e-9ca3-428b-8588-be5b14c30b64-operator-scripts\") pod \"26c56d4e-9ca3-428b-8588-be5b14c30b64\" (UID: \"26c56d4e-9ca3-428b-8588-be5b14c30b64\") " Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.090651 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-config-data\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.091344 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c56d4e-9ca3-428b-8588-be5b14c30b64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26c56d4e-9ca3-428b-8588-be5b14c30b64" (UID: "26c56d4e-9ca3-428b-8588-be5b14c30b64"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.091485 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4hxh\" (UniqueName: \"kubernetes.io/projected/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-kube-api-access-x4hxh\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.091830 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-db-sync-config-data\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.092000 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-combined-ca-bundle\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.092402 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26c56d4e-9ca3-428b-8588-be5b14c30b64-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.096620 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c56d4e-9ca3-428b-8588-be5b14c30b64-kube-api-access-9jg4z" (OuterVolumeSpecName: "kube-api-access-9jg4z") pod "26c56d4e-9ca3-428b-8588-be5b14c30b64" (UID: "26c56d4e-9ca3-428b-8588-be5b14c30b64"). InnerVolumeSpecName "kube-api-access-9jg4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.097109 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-db-sync-config-data\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.097231 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-config-data\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.097306 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-combined-ca-bundle\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.106985 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4hxh\" (UniqueName: \"kubernetes.io/projected/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-kube-api-access-x4hxh\") pod \"glance-db-sync-8k7gx\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.194165 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jg4z\" (UniqueName: \"kubernetes.io/projected/26c56d4e-9ca3-428b-8588-be5b14c30b64-kube-api-access-9jg4z\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.208655 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.631916 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d47wv" event={"ID":"26c56d4e-9ca3-428b-8588-be5b14c30b64","Type":"ContainerDied","Data":"d4cece63efd99d8214adb70efe95977bf8500ab75cc53c7e00fa4eea7d93921c"} Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.632265 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cece63efd99d8214adb70efe95977bf8500ab75cc53c7e00fa4eea7d93921c" Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.632348 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d47wv" Feb 25 11:35:03 crc kubenswrapper[5005]: W0225 11:35:03.718925 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6fdb7b6_9eca_4adc_a5d2_3aee73085ea4.slice/crio-4f19ca4ebd41504a61cbab994480f5bc5e66ddd2e699f36567c9a0a9464744fe WatchSource:0}: Error finding container 4f19ca4ebd41504a61cbab994480f5bc5e66ddd2e699f36567c9a0a9464744fe: Status 404 returned error can't find the container with id 4f19ca4ebd41504a61cbab994480f5bc5e66ddd2e699f36567c9a0a9464744fe Feb 25 11:35:03 crc kubenswrapper[5005]: I0225 11:35:03.726208 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8k7gx"] Feb 25 11:35:04 crc kubenswrapper[5005]: I0225 11:35:04.642364 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8k7gx" event={"ID":"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4","Type":"ContainerStarted","Data":"4f19ca4ebd41504a61cbab994480f5bc5e66ddd2e699f36567c9a0a9464744fe"} Feb 25 11:35:05 crc kubenswrapper[5005]: I0225 11:35:05.395178 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 25 11:35:05 crc 
kubenswrapper[5005]: I0225 11:35:05.812678 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d47wv"] Feb 25 11:35:05 crc kubenswrapper[5005]: I0225 11:35:05.825254 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d47wv"] Feb 25 11:35:06 crc kubenswrapper[5005]: I0225 11:35:06.699625 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c56d4e-9ca3-428b-8588-be5b14c30b64" path="/var/lib/kubelet/pods/26c56d4e-9ca3-428b-8588-be5b14c30b64/volumes" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.821971 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b824f"] Feb 25 11:35:10 crc kubenswrapper[5005]: E0225 11:35:10.822872 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c56d4e-9ca3-428b-8588-be5b14c30b64" containerName="mariadb-account-create-update" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.822889 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c56d4e-9ca3-428b-8588-be5b14c30b64" containerName="mariadb-account-create-update" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.823119 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c56d4e-9ca3-428b-8588-be5b14c30b64" containerName="mariadb-account-create-update" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.823729 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b824f" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.825508 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.830690 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b824f"] Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.953677 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pz9\" (UniqueName: \"kubernetes.io/projected/e16ead21-e865-48ed-8adc-fa1892f6a38f-kube-api-access-x2pz9\") pod \"root-account-create-update-b824f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " pod="openstack/root-account-create-update-b824f" Feb 25 11:35:10 crc kubenswrapper[5005]: I0225 11:35:10.953779 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16ead21-e865-48ed-8adc-fa1892f6a38f-operator-scripts\") pod \"root-account-create-update-b824f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " pod="openstack/root-account-create-update-b824f" Feb 25 11:35:11 crc kubenswrapper[5005]: I0225 11:35:11.055453 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16ead21-e865-48ed-8adc-fa1892f6a38f-operator-scripts\") pod \"root-account-create-update-b824f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " pod="openstack/root-account-create-update-b824f" Feb 25 11:35:11 crc kubenswrapper[5005]: I0225 11:35:11.055593 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2pz9\" (UniqueName: \"kubernetes.io/projected/e16ead21-e865-48ed-8adc-fa1892f6a38f-kube-api-access-x2pz9\") pod \"root-account-create-update-b824f\" (UID: 
\"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " pod="openstack/root-account-create-update-b824f" Feb 25 11:35:11 crc kubenswrapper[5005]: I0225 11:35:11.056508 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16ead21-e865-48ed-8adc-fa1892f6a38f-operator-scripts\") pod \"root-account-create-update-b824f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " pod="openstack/root-account-create-update-b824f" Feb 25 11:35:11 crc kubenswrapper[5005]: I0225 11:35:11.086579 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2pz9\" (UniqueName: \"kubernetes.io/projected/e16ead21-e865-48ed-8adc-fa1892f6a38f-kube-api-access-x2pz9\") pod \"root-account-create-update-b824f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " pod="openstack/root-account-create-update-b824f" Feb 25 11:35:11 crc kubenswrapper[5005]: I0225 11:35:11.154838 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b824f" Feb 25 11:35:12 crc kubenswrapper[5005]: I0225 11:35:12.703299 5005 generic.go:334] "Generic (PLEG): container finished" podID="773c4b57-17bf-4159-9b75-81072c68692e" containerID="06eb8e47ec1e952c5b7fecf002337ff4943629e25e9129542412322444f70bcd" exitCode=0 Feb 25 11:35:12 crc kubenswrapper[5005]: I0225 11:35:12.703364 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"773c4b57-17bf-4159-9b75-81072c68692e","Type":"ContainerDied","Data":"06eb8e47ec1e952c5b7fecf002337ff4943629e25e9129542412322444f70bcd"} Feb 25 11:35:12 crc kubenswrapper[5005]: I0225 11:35:12.706992 5005 generic.go:334] "Generic (PLEG): container finished" podID="86534792-e561-447f-bcef-4ff82b02561c" containerID="626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2" exitCode=0 Feb 25 11:35:12 crc kubenswrapper[5005]: I0225 11:35:12.707024 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86534792-e561-447f-bcef-4ff82b02561c","Type":"ContainerDied","Data":"626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2"} Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.468932 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mhhkt" podUID="d7433eab-76d5-403c-8949-6b99fa8624d5" containerName="ovn-controller" probeResult="failure" output=< Feb 25 11:35:13 crc kubenswrapper[5005]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 25 11:35:13 crc kubenswrapper[5005]: > Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.474620 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-stnkf" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.474681 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-stnkf" Feb 25 
11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.704821 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mhhkt-config-mqjts"] Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.706220 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.710211 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.711895 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mhhkt-config-mqjts"] Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.720898 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"773c4b57-17bf-4159-9b75-81072c68692e","Type":"ContainerStarted","Data":"961da7cb929f2b8a6214035d30be71ffd508090ea56b3de8cfa7aeff6862ff0e"} Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.721336 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.741983 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86534792-e561-447f-bcef-4ff82b02561c","Type":"ContainerStarted","Data":"ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba"} Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.742924 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.765473 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.163779318 podStartE2EDuration="56.765457441s" podCreationTimestamp="2026-02-25 11:34:17 +0000 UTC" firstStartedPulling="2026-02-25 11:34:30.924250711 +0000 UTC 
m=+984.964983038" lastFinishedPulling="2026-02-25 11:34:38.525928834 +0000 UTC m=+992.566661161" observedRunningTime="2026-02-25 11:35:13.763249664 +0000 UTC m=+1027.803981991" watchObservedRunningTime="2026-02-25 11:35:13.765457441 +0000 UTC m=+1027.806189768" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.794768 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.558578417 podStartE2EDuration="55.794732942s" podCreationTimestamp="2026-02-25 11:34:18 +0000 UTC" firstStartedPulling="2026-02-25 11:34:31.455551292 +0000 UTC m=+985.496283619" lastFinishedPulling="2026-02-25 11:34:38.691705807 +0000 UTC m=+992.732438144" observedRunningTime="2026-02-25 11:35:13.785310215 +0000 UTC m=+1027.826042542" watchObservedRunningTime="2026-02-25 11:35:13.794732942 +0000 UTC m=+1027.835465269" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.802577 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run-ovn\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.802669 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-log-ovn\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.803604 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run\") pod 
\"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.803644 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-additional-scripts\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.804573 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5lb\" (UniqueName: \"kubernetes.io/projected/3d91cf8f-aada-4516-8b39-1446799622c9-kube-api-access-zf5lb\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.804705 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-scripts\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.868388 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b824f"] Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.906790 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run-ovn\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc 
kubenswrapper[5005]: I0225 11:35:13.906862 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-log-ovn\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.906921 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.906943 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-additional-scripts\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.906987 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5lb\" (UniqueName: \"kubernetes.io/projected/3d91cf8f-aada-4516-8b39-1446799622c9-kube-api-access-zf5lb\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.907813 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-scripts\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc 
kubenswrapper[5005]: I0225 11:35:13.907265 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.907320 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run-ovn\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.907739 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-additional-scripts\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.907250 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-log-ovn\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.910517 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-scripts\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:13 crc kubenswrapper[5005]: I0225 11:35:13.929657 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zf5lb\" (UniqueName: \"kubernetes.io/projected/3d91cf8f-aada-4516-8b39-1446799622c9-kube-api-access-zf5lb\") pod \"ovn-controller-mhhkt-config-mqjts\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.048766 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.337025 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mhhkt-config-mqjts"] Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.751338 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8k7gx" event={"ID":"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4","Type":"ContainerStarted","Data":"815d02c514771320af523b35277ca48176d5cc869d02adb292dffe19d904376b"} Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.753620 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mhhkt-config-mqjts" event={"ID":"3d91cf8f-aada-4516-8b39-1446799622c9","Type":"ContainerStarted","Data":"fa48850c458d8b67647176930b1d6bae600e22fb94f4932b97a6498b6b33f99d"} Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.755616 5005 generic.go:334] "Generic (PLEG): container finished" podID="e16ead21-e865-48ed-8adc-fa1892f6a38f" containerID="345ed5d728ed0d0d7036455b3a9ed66f585f80930fc0eb19f5bdba901adad354" exitCode=0 Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.755690 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b824f" event={"ID":"e16ead21-e865-48ed-8adc-fa1892f6a38f","Type":"ContainerDied","Data":"345ed5d728ed0d0d7036455b3a9ed66f585f80930fc0eb19f5bdba901adad354"} Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.755715 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-b824f" event={"ID":"e16ead21-e865-48ed-8adc-fa1892f6a38f","Type":"ContainerStarted","Data":"1acddcb9dcd661f8edba8c2f7f7bff7a764d612631485ff0735fbdc69ac76fc2"} Feb 25 11:35:14 crc kubenswrapper[5005]: I0225 11:35:14.785464 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8k7gx" podStartSLOduration=3.025466884 podStartE2EDuration="12.785446528s" podCreationTimestamp="2026-02-25 11:35:02 +0000 UTC" firstStartedPulling="2026-02-25 11:35:03.721050375 +0000 UTC m=+1017.761782712" lastFinishedPulling="2026-02-25 11:35:13.481030029 +0000 UTC m=+1027.521762356" observedRunningTime="2026-02-25 11:35:14.767987607 +0000 UTC m=+1028.808720004" watchObservedRunningTime="2026-02-25 11:35:14.785446528 +0000 UTC m=+1028.826178875" Feb 25 11:35:15 crc kubenswrapper[5005]: I0225 11:35:15.764565 5005 generic.go:334] "Generic (PLEG): container finished" podID="3d91cf8f-aada-4516-8b39-1446799622c9" containerID="910dec65f28760f00b5da1cd4499b5bed074277e43f7eeaba4945ccdb6ef44a1" exitCode=0 Feb 25 11:35:15 crc kubenswrapper[5005]: I0225 11:35:15.764620 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mhhkt-config-mqjts" event={"ID":"3d91cf8f-aada-4516-8b39-1446799622c9","Type":"ContainerDied","Data":"910dec65f28760f00b5da1cd4499b5bed074277e43f7eeaba4945ccdb6ef44a1"} Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.096141 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b824f" Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.244356 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16ead21-e865-48ed-8adc-fa1892f6a38f-operator-scripts\") pod \"e16ead21-e865-48ed-8adc-fa1892f6a38f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.244442 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2pz9\" (UniqueName: \"kubernetes.io/projected/e16ead21-e865-48ed-8adc-fa1892f6a38f-kube-api-access-x2pz9\") pod \"e16ead21-e865-48ed-8adc-fa1892f6a38f\" (UID: \"e16ead21-e865-48ed-8adc-fa1892f6a38f\") " Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.245039 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16ead21-e865-48ed-8adc-fa1892f6a38f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e16ead21-e865-48ed-8adc-fa1892f6a38f" (UID: "e16ead21-e865-48ed-8adc-fa1892f6a38f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.249494 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16ead21-e865-48ed-8adc-fa1892f6a38f-kube-api-access-x2pz9" (OuterVolumeSpecName: "kube-api-access-x2pz9") pod "e16ead21-e865-48ed-8adc-fa1892f6a38f" (UID: "e16ead21-e865-48ed-8adc-fa1892f6a38f"). InnerVolumeSpecName "kube-api-access-x2pz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.346606 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e16ead21-e865-48ed-8adc-fa1892f6a38f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.346639 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2pz9\" (UniqueName: \"kubernetes.io/projected/e16ead21-e865-48ed-8adc-fa1892f6a38f-kube-api-access-x2pz9\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.773155 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b824f" event={"ID":"e16ead21-e865-48ed-8adc-fa1892f6a38f","Type":"ContainerDied","Data":"1acddcb9dcd661f8edba8c2f7f7bff7a764d612631485ff0735fbdc69ac76fc2"} Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.773691 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1acddcb9dcd661f8edba8c2f7f7bff7a764d612631485ff0735fbdc69ac76fc2" Feb 25 11:35:16 crc kubenswrapper[5005]: I0225 11:35:16.773177 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b824f" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.100934 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260066 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-log-ovn\") pod \"3d91cf8f-aada-4516-8b39-1446799622c9\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260166 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5lb\" (UniqueName: \"kubernetes.io/projected/3d91cf8f-aada-4516-8b39-1446799622c9-kube-api-access-zf5lb\") pod \"3d91cf8f-aada-4516-8b39-1446799622c9\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260197 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-additional-scripts\") pod \"3d91cf8f-aada-4516-8b39-1446799622c9\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260266 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run-ovn\") pod \"3d91cf8f-aada-4516-8b39-1446799622c9\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260291 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-scripts\") pod \"3d91cf8f-aada-4516-8b39-1446799622c9\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260320 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run\") pod \"3d91cf8f-aada-4516-8b39-1446799622c9\" (UID: \"3d91cf8f-aada-4516-8b39-1446799622c9\") " Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260643 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run" (OuterVolumeSpecName: "var-run") pod "3d91cf8f-aada-4516-8b39-1446799622c9" (UID: "3d91cf8f-aada-4516-8b39-1446799622c9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260675 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3d91cf8f-aada-4516-8b39-1446799622c9" (UID: "3d91cf8f-aada-4516-8b39-1446799622c9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.260708 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3d91cf8f-aada-4516-8b39-1446799622c9" (UID: "3d91cf8f-aada-4516-8b39-1446799622c9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.261497 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3d91cf8f-aada-4516-8b39-1446799622c9" (UID: "3d91cf8f-aada-4516-8b39-1446799622c9"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.261671 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-scripts" (OuterVolumeSpecName: "scripts") pod "3d91cf8f-aada-4516-8b39-1446799622c9" (UID: "3d91cf8f-aada-4516-8b39-1446799622c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.279262 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d91cf8f-aada-4516-8b39-1446799622c9-kube-api-access-zf5lb" (OuterVolumeSpecName: "kube-api-access-zf5lb") pod "3d91cf8f-aada-4516-8b39-1446799622c9" (UID: "3d91cf8f-aada-4516-8b39-1446799622c9"). InnerVolumeSpecName "kube-api-access-zf5lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.362612 5005 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.362653 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.362669 5005 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-run\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.362680 5005 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3d91cf8f-aada-4516-8b39-1446799622c9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:17 crc 
kubenswrapper[5005]: I0225 11:35:17.362692 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5lb\" (UniqueName: \"kubernetes.io/projected/3d91cf8f-aada-4516-8b39-1446799622c9-kube-api-access-zf5lb\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.362707 5005 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3d91cf8f-aada-4516-8b39-1446799622c9-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.783444 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mhhkt-config-mqjts" event={"ID":"3d91cf8f-aada-4516-8b39-1446799622c9","Type":"ContainerDied","Data":"fa48850c458d8b67647176930b1d6bae600e22fb94f4932b97a6498b6b33f99d"} Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.783861 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa48850c458d8b67647176930b1d6bae600e22fb94f4932b97a6498b6b33f99d" Feb 25 11:35:17 crc kubenswrapper[5005]: I0225 11:35:17.783535 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mhhkt-config-mqjts" Feb 25 11:35:18 crc kubenswrapper[5005]: I0225 11:35:18.208885 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mhhkt-config-mqjts"] Feb 25 11:35:18 crc kubenswrapper[5005]: I0225 11:35:18.220138 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mhhkt-config-mqjts"] Feb 25 11:35:18 crc kubenswrapper[5005]: I0225 11:35:18.456357 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mhhkt" Feb 25 11:35:18 crc kubenswrapper[5005]: I0225 11:35:18.701610 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d91cf8f-aada-4516-8b39-1446799622c9" path="/var/lib/kubelet/pods/3d91cf8f-aada-4516-8b39-1446799622c9/volumes" Feb 25 11:35:19 crc kubenswrapper[5005]: I0225 11:35:19.799331 5005 generic.go:334] "Generic (PLEG): container finished" podID="a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" containerID="815d02c514771320af523b35277ca48176d5cc869d02adb292dffe19d904376b" exitCode=0 Feb 25 11:35:19 crc kubenswrapper[5005]: I0225 11:35:19.799386 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8k7gx" event={"ID":"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4","Type":"ContainerDied","Data":"815d02c514771320af523b35277ca48176d5cc869d02adb292dffe19d904376b"} Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.164352 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.267212 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4hxh\" (UniqueName: \"kubernetes.io/projected/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-kube-api-access-x4hxh\") pod \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.267288 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-combined-ca-bundle\") pod \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.267366 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-db-sync-config-data\") pod \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.267400 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-config-data\") pod \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\" (UID: \"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4\") " Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.272755 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" (UID: "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.274120 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-kube-api-access-x4hxh" (OuterVolumeSpecName: "kube-api-access-x4hxh") pod "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" (UID: "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4"). InnerVolumeSpecName "kube-api-access-x4hxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.289247 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" (UID: "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.322116 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-config-data" (OuterVolumeSpecName: "config-data") pod "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" (UID: "a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.369511 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.369551 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4hxh\" (UniqueName: \"kubernetes.io/projected/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-kube-api-access-x4hxh\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.369565 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.369578 5005 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.818543 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8k7gx" event={"ID":"a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4","Type":"ContainerDied","Data":"4f19ca4ebd41504a61cbab994480f5bc5e66ddd2e699f36567c9a0a9464744fe"} Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.818802 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f19ca4ebd41504a61cbab994480f5bc5e66ddd2e699f36567c9a0a9464744fe" Feb 25 11:35:21 crc kubenswrapper[5005]: I0225 11:35:21.818633 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8k7gx" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.402158 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-shqxt"] Feb 25 11:35:22 crc kubenswrapper[5005]: E0225 11:35:22.402966 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" containerName="glance-db-sync" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.402983 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" containerName="glance-db-sync" Feb 25 11:35:22 crc kubenswrapper[5005]: E0225 11:35:22.402998 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d91cf8f-aada-4516-8b39-1446799622c9" containerName="ovn-config" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.403005 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d91cf8f-aada-4516-8b39-1446799622c9" containerName="ovn-config" Feb 25 11:35:22 crc kubenswrapper[5005]: E0225 11:35:22.403014 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16ead21-e865-48ed-8adc-fa1892f6a38f" containerName="mariadb-account-create-update" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.403022 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16ead21-e865-48ed-8adc-fa1892f6a38f" containerName="mariadb-account-create-update" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.403215 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16ead21-e865-48ed-8adc-fa1892f6a38f" containerName="mariadb-account-create-update" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.403233 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" containerName="glance-db-sync" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.403247 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d91cf8f-aada-4516-8b39-1446799622c9" containerName="ovn-config" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.406344 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.419030 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-shqxt"] Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.487582 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkp5r\" (UniqueName: \"kubernetes.io/projected/7e5145ce-efae-4a28-9f07-d2922d2682bb-kube-api-access-fkp5r\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.487635 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.487697 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.487733 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-config\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " 
pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.487763 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.589048 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.589126 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-config\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.589164 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.589245 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkp5r\" (UniqueName: \"kubernetes.io/projected/7e5145ce-efae-4a28-9f07-d2922d2682bb-kube-api-access-fkp5r\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 
11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.589283 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.590032 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-config\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.590036 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.590143 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.590149 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.606193 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fkp5r\" (UniqueName: \"kubernetes.io/projected/7e5145ce-efae-4a28-9f07-d2922d2682bb-kube-api-access-fkp5r\") pod \"dnsmasq-dns-54f9b7b8d9-shqxt\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:22 crc kubenswrapper[5005]: I0225 11:35:22.725064 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:23 crc kubenswrapper[5005]: I0225 11:35:23.146386 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-shqxt"] Feb 25 11:35:23 crc kubenswrapper[5005]: I0225 11:35:23.837167 5005 generic.go:334] "Generic (PLEG): container finished" podID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerID="3fef4ac12736813affee756604d2aa92c54cfe09710c675657f662a1a385c08e" exitCode=0 Feb 25 11:35:23 crc kubenswrapper[5005]: I0225 11:35:23.837293 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" event={"ID":"7e5145ce-efae-4a28-9f07-d2922d2682bb","Type":"ContainerDied","Data":"3fef4ac12736813affee756604d2aa92c54cfe09710c675657f662a1a385c08e"} Feb 25 11:35:23 crc kubenswrapper[5005]: I0225 11:35:23.837518 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" event={"ID":"7e5145ce-efae-4a28-9f07-d2922d2682bb","Type":"ContainerStarted","Data":"c91c7bda94e61d228bb07edf2f314af3700c8bcac48fb0b33724c4d58bfe913a"} Feb 25 11:35:24 crc kubenswrapper[5005]: I0225 11:35:24.851156 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" event={"ID":"7e5145ce-efae-4a28-9f07-d2922d2682bb","Type":"ContainerStarted","Data":"2085434377bad043b70e66957f75af5fd0c473326a82f50f8848ccc9c8dba41e"} Feb 25 11:35:24 crc kubenswrapper[5005]: I0225 11:35:24.851486 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:24 crc kubenswrapper[5005]: I0225 11:35:24.885363 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" podStartSLOduration=2.885332092 podStartE2EDuration="2.885332092s" podCreationTimestamp="2026-02-25 11:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:24.875416671 +0000 UTC m=+1038.916149028" watchObservedRunningTime="2026-02-25 11:35:24.885332092 +0000 UTC m=+1038.926064459" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.264697 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.531606 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.669622 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kw2jf"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.670963 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.681133 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kw2jf"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.760516 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xv4rr"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.761412 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.779927 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xv4rr"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.787902 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-605b-account-create-update-dpg5x"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.788971 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.793078 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.803257 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-605b-account-create-update-dpg5x"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.818613 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0637913d-b2d2-4492-8aa2-eba57f5e7177-operator-scripts\") pod \"cinder-db-create-kw2jf\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.818727 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpp48\" (UniqueName: \"kubernetes.io/projected/14057729-fc6f-46d3-ba9f-606be9cb3e28-kube-api-access-mpp48\") pod \"barbican-db-create-xv4rr\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.818749 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/14057729-fc6f-46d3-ba9f-606be9cb3e28-operator-scripts\") pod \"barbican-db-create-xv4rr\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.818768 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q525w\" (UniqueName: \"kubernetes.io/projected/0637913d-b2d2-4492-8aa2-eba57f5e7177-kube-api-access-q525w\") pod \"cinder-db-create-kw2jf\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.857093 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jt9t4"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.858099 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.870809 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-495c-account-create-update-zshp6"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.871764 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.874715 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.879927 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-495c-account-create-update-zshp6"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.886114 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jt9t4"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.916636 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dp75q"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.917672 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.919916 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-operator-scripts\") pod \"neutron-db-create-jt9t4\" (UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.919981 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0637913d-b2d2-4492-8aa2-eba57f5e7177-operator-scripts\") pod \"cinder-db-create-kw2jf\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.920005 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpz6\" (UniqueName: \"kubernetes.io/projected/ef4bd119-b8db-4ea6-92e1-efca3d60f766-kube-api-access-bgpz6\") pod 
\"barbican-605b-account-create-update-dpg5x\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.920028 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bd119-b8db-4ea6-92e1-efca3d60f766-operator-scripts\") pod \"barbican-605b-account-create-update-dpg5x\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.920050 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xf4q\" (UniqueName: \"kubernetes.io/projected/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-kube-api-access-8xf4q\") pod \"neutron-db-create-jt9t4\" (UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.920112 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpp48\" (UniqueName: \"kubernetes.io/projected/14057729-fc6f-46d3-ba9f-606be9cb3e28-kube-api-access-mpp48\") pod \"barbican-db-create-xv4rr\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.920131 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14057729-fc6f-46d3-ba9f-606be9cb3e28-operator-scripts\") pod \"barbican-db-create-xv4rr\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.920151 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q525w\" (UniqueName: 
\"kubernetes.io/projected/0637913d-b2d2-4492-8aa2-eba57f5e7177-kube-api-access-q525w\") pod \"cinder-db-create-kw2jf\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.921036 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0637913d-b2d2-4492-8aa2-eba57f5e7177-operator-scripts\") pod \"cinder-db-create-kw2jf\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.921706 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14057729-fc6f-46d3-ba9f-606be9cb3e28-operator-scripts\") pod \"barbican-db-create-xv4rr\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.922044 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nppj9" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.922192 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.922295 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.922407 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.941234 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dp75q"] Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.944067 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpp48\" (UniqueName: 
\"kubernetes.io/projected/14057729-fc6f-46d3-ba9f-606be9cb3e28-kube-api-access-mpp48\") pod \"barbican-db-create-xv4rr\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.976802 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q525w\" (UniqueName: \"kubernetes.io/projected/0637913d-b2d2-4492-8aa2-eba57f5e7177-kube-api-access-q525w\") pod \"cinder-db-create-kw2jf\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:29 crc kubenswrapper[5005]: I0225 11:35:29.990449 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021573 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-operator-scripts\") pod \"neutron-db-create-jt9t4\" (UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021655 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f67m\" (UniqueName: \"kubernetes.io/projected/9bceed99-c54d-44a7-b7f0-85183b242006-kube-api-access-5f67m\") pod \"cinder-495c-account-create-update-zshp6\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021688 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-config-data\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 
11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021724 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-combined-ca-bundle\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021747 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpz6\" (UniqueName: \"kubernetes.io/projected/ef4bd119-b8db-4ea6-92e1-efca3d60f766-kube-api-access-bgpz6\") pod \"barbican-605b-account-create-update-dpg5x\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021768 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bd119-b8db-4ea6-92e1-efca3d60f766-operator-scripts\") pod \"barbican-605b-account-create-update-dpg5x\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021791 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bceed99-c54d-44a7-b7f0-85183b242006-operator-scripts\") pod \"cinder-495c-account-create-update-zshp6\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.021812 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xf4q\" (UniqueName: \"kubernetes.io/projected/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-kube-api-access-8xf4q\") pod \"neutron-db-create-jt9t4\" (UID: 
\"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.022020 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbwzf\" (UniqueName: \"kubernetes.io/projected/50d0cec3-9a4a-4252-accb-dc4194ad752e-kube-api-access-qbwzf\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.022643 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bd119-b8db-4ea6-92e1-efca3d60f766-operator-scripts\") pod \"barbican-605b-account-create-update-dpg5x\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.022896 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-operator-scripts\") pod \"neutron-db-create-jt9t4\" (UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.040972 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpz6\" (UniqueName: \"kubernetes.io/projected/ef4bd119-b8db-4ea6-92e1-efca3d60f766-kube-api-access-bgpz6\") pod \"barbican-605b-account-create-update-dpg5x\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.041466 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xf4q\" (UniqueName: \"kubernetes.io/projected/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-kube-api-access-8xf4q\") pod \"neutron-db-create-jt9t4\" 
(UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.075977 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.112767 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.123852 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-combined-ca-bundle\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.123904 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bceed99-c54d-44a7-b7f0-85183b242006-operator-scripts\") pod \"cinder-495c-account-create-update-zshp6\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.123974 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbwzf\" (UniqueName: \"kubernetes.io/projected/50d0cec3-9a4a-4252-accb-dc4194ad752e-kube-api-access-qbwzf\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.124013 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f67m\" (UniqueName: \"kubernetes.io/projected/9bceed99-c54d-44a7-b7f0-85183b242006-kube-api-access-5f67m\") pod \"cinder-495c-account-create-update-zshp6\" (UID: 
\"9bceed99-c54d-44a7-b7f0-85183b242006\") " pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.124039 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-config-data\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.125978 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bceed99-c54d-44a7-b7f0-85183b242006-operator-scripts\") pod \"cinder-495c-account-create-update-zshp6\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.135959 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-combined-ca-bundle\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.145456 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-config-data\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.153468 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbwzf\" (UniqueName: \"kubernetes.io/projected/50d0cec3-9a4a-4252-accb-dc4194ad752e-kube-api-access-qbwzf\") pod \"keystone-db-sync-dp75q\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " pod="openstack/keystone-db-sync-dp75q" Feb 25 
11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.162731 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f67m\" (UniqueName: \"kubernetes.io/projected/9bceed99-c54d-44a7-b7f0-85183b242006-kube-api-access-5f67m\") pod \"cinder-495c-account-create-update-zshp6\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.167662 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1fc9-account-create-update-fhrgw"] Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.168547 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.173026 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.176785 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.184134 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1fc9-account-create-update-fhrgw"] Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.189354 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.225636 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqpr\" (UniqueName: \"kubernetes.io/projected/9c924c62-341b-43ea-af0b-ac567b5acfd0-kube-api-access-rfqpr\") pod \"neutron-1fc9-account-create-update-fhrgw\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.226012 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c924c62-341b-43ea-af0b-ac567b5acfd0-operator-scripts\") pod \"neutron-1fc9-account-create-update-fhrgw\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.240543 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.329892 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqpr\" (UniqueName: \"kubernetes.io/projected/9c924c62-341b-43ea-af0b-ac567b5acfd0-kube-api-access-rfqpr\") pod \"neutron-1fc9-account-create-update-fhrgw\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.330023 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c924c62-341b-43ea-af0b-ac567b5acfd0-operator-scripts\") pod \"neutron-1fc9-account-create-update-fhrgw\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.330795 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c924c62-341b-43ea-af0b-ac567b5acfd0-operator-scripts\") pod \"neutron-1fc9-account-create-update-fhrgw\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.354944 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqpr\" (UniqueName: \"kubernetes.io/projected/9c924c62-341b-43ea-af0b-ac567b5acfd0-kube-api-access-rfqpr\") pod \"neutron-1fc9-account-create-update-fhrgw\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:30 crc kubenswrapper[5005]: I0225 11:35:30.463492 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kw2jf"] Feb 25 11:35:30 crc kubenswrapper[5005]: W0225 11:35:30.470350 5005 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0637913d_b2d2_4492_8aa2_eba57f5e7177.slice/crio-54b999a0bc7ad914cb7ef90c760d9ac2164cec477e6e278269deb1c29b8a1805 WatchSource:0}: Error finding container 54b999a0bc7ad914cb7ef90c760d9ac2164cec477e6e278269deb1c29b8a1805: Status 404 returned error can't find the container with id 54b999a0bc7ad914cb7ef90c760d9ac2164cec477e6e278269deb1c29b8a1805 Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.496767 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.648126 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-605b-account-create-update-dpg5x"] Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.705791 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xv4rr"] Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.905996 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xv4rr" event={"ID":"14057729-fc6f-46d3-ba9f-606be9cb3e28","Type":"ContainerStarted","Data":"266aa5b4962075250ebc2828496cba227fb34428c2df38e53a719a6fb6cd94e0"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.907561 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-605b-account-create-update-dpg5x" event={"ID":"ef4bd119-b8db-4ea6-92e1-efca3d60f766","Type":"ContainerStarted","Data":"59848075720a1db4a249ef7203c2fb94753952f95787c54be547b60995c76725"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.907585 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-605b-account-create-update-dpg5x" event={"ID":"ef4bd119-b8db-4ea6-92e1-efca3d60f766","Type":"ContainerStarted","Data":"ffd18e45bb2064e9ca9472dbc7bd1f364fb4be3cd1341545fd08d0b561b909f4"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 
11:35:30.909775 5005 generic.go:334] "Generic (PLEG): container finished" podID="0637913d-b2d2-4492-8aa2-eba57f5e7177" containerID="f843b1f73bfe00c37a78c01266db491f0b8c22f10d1e81b85edc4ba1e6c0294c" exitCode=0 Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.909799 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kw2jf" event={"ID":"0637913d-b2d2-4492-8aa2-eba57f5e7177","Type":"ContainerDied","Data":"f843b1f73bfe00c37a78c01266db491f0b8c22f10d1e81b85edc4ba1e6c0294c"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.909819 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kw2jf" event={"ID":"0637913d-b2d2-4492-8aa2-eba57f5e7177","Type":"ContainerStarted","Data":"54b999a0bc7ad914cb7ef90c760d9ac2164cec477e6e278269deb1c29b8a1805"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:30.930650 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-605b-account-create-update-dpg5x" podStartSLOduration=1.930625702 podStartE2EDuration="1.930625702s" podCreationTimestamp="2026-02-25 11:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:30.922850276 +0000 UTC m=+1044.963582603" watchObservedRunningTime="2026-02-25 11:35:30.930625702 +0000 UTC m=+1044.971358039" Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.487545 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-495c-account-create-update-zshp6"] Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.496967 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dp75q"] Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.504362 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jt9t4"] Feb 25 11:35:31 crc kubenswrapper[5005]: W0225 11:35:31.516225 5005 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50d0cec3_9a4a_4252_accb_dc4194ad752e.slice/crio-70330e78d33c1eab3de514445e49b4d11ed390249f1481edaf9908caf8bac739 WatchSource:0}: Error finding container 70330e78d33c1eab3de514445e49b4d11ed390249f1481edaf9908caf8bac739: Status 404 returned error can't find the container with id 70330e78d33c1eab3de514445e49b4d11ed390249f1481edaf9908caf8bac739 Feb 25 11:35:31 crc kubenswrapper[5005]: W0225 11:35:31.516507 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41d7b8d7_c3a9_4065_8dd8_5c01a37f6566.slice/crio-a4e6eddc3cd1036707f0f1ed5bf6f90aee829e7c085a753314b7c4e5813ebec1 WatchSource:0}: Error finding container a4e6eddc3cd1036707f0f1ed5bf6f90aee829e7c085a753314b7c4e5813ebec1: Status 404 returned error can't find the container with id a4e6eddc3cd1036707f0f1ed5bf6f90aee829e7c085a753314b7c4e5813ebec1 Feb 25 11:35:31 crc kubenswrapper[5005]: W0225 11:35:31.517714 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bceed99_c54d_44a7_b7f0_85183b242006.slice/crio-254645115d99e3e92018e37dec80081be918a17158b03d0a4d74ba27576b5e1d WatchSource:0}: Error finding container 254645115d99e3e92018e37dec80081be918a17158b03d0a4d74ba27576b5e1d: Status 404 returned error can't find the container with id 254645115d99e3e92018e37dec80081be918a17158b03d0a4d74ba27576b5e1d Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.643305 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1fc9-account-create-update-fhrgw"] Feb 25 11:35:31 crc kubenswrapper[5005]: W0225 11:35:31.652085 5005 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c924c62_341b_43ea_af0b_ac567b5acfd0.slice/crio-020bfaa2f585033196b0ad23e231808a0526ff9ac45960d6c90aef6a4793723b WatchSource:0}: Error finding container 020bfaa2f585033196b0ad23e231808a0526ff9ac45960d6c90aef6a4793723b: Status 404 returned error can't find the container with id 020bfaa2f585033196b0ad23e231808a0526ff9ac45960d6c90aef6a4793723b Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.920978 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dp75q" event={"ID":"50d0cec3-9a4a-4252-accb-dc4194ad752e","Type":"ContainerStarted","Data":"70330e78d33c1eab3de514445e49b4d11ed390249f1481edaf9908caf8bac739"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.922497 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jt9t4" event={"ID":"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566","Type":"ContainerStarted","Data":"f2ebb03f275271efadc78e5ea3c7d55611237aea67ad3882ea356517a1261b27"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.922527 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jt9t4" event={"ID":"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566","Type":"ContainerStarted","Data":"a4e6eddc3cd1036707f0f1ed5bf6f90aee829e7c085a753314b7c4e5813ebec1"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.927321 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1fc9-account-create-update-fhrgw" event={"ID":"9c924c62-341b-43ea-af0b-ac567b5acfd0","Type":"ContainerStarted","Data":"f580a081e16413868fa6337e55d16bbe79ddf2c7f702cff7a44696c7d609c6ca"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.927363 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1fc9-account-create-update-fhrgw" event={"ID":"9c924c62-341b-43ea-af0b-ac567b5acfd0","Type":"ContainerStarted","Data":"020bfaa2f585033196b0ad23e231808a0526ff9ac45960d6c90aef6a4793723b"} Feb 
25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.929044 5005 generic.go:334] "Generic (PLEG): container finished" podID="14057729-fc6f-46d3-ba9f-606be9cb3e28" containerID="8a7417500063e6ba1c55865afe2325e88a10b31fb69926a3a6b290fda7de620c" exitCode=0 Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.929125 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xv4rr" event={"ID":"14057729-fc6f-46d3-ba9f-606be9cb3e28","Type":"ContainerDied","Data":"8a7417500063e6ba1c55865afe2325e88a10b31fb69926a3a6b290fda7de620c"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.933628 5005 generic.go:334] "Generic (PLEG): container finished" podID="ef4bd119-b8db-4ea6-92e1-efca3d60f766" containerID="59848075720a1db4a249ef7203c2fb94753952f95787c54be547b60995c76725" exitCode=0 Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.933741 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-605b-account-create-update-dpg5x" event={"ID":"ef4bd119-b8db-4ea6-92e1-efca3d60f766","Type":"ContainerDied","Data":"59848075720a1db4a249ef7203c2fb94753952f95787c54be547b60995c76725"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.936432 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-495c-account-create-update-zshp6" event={"ID":"9bceed99-c54d-44a7-b7f0-85183b242006","Type":"ContainerStarted","Data":"b835a9d324d0aed163674299f0268232a5e454350e3023a27a40fbf0c129ebb6"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.936478 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-495c-account-create-update-zshp6" event={"ID":"9bceed99-c54d-44a7-b7f0-85183b242006","Type":"ContainerStarted","Data":"254645115d99e3e92018e37dec80081be918a17158b03d0a4d74ba27576b5e1d"} Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.977454 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jt9t4" 
podStartSLOduration=2.977428774 podStartE2EDuration="2.977428774s" podCreationTimestamp="2026-02-25 11:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:31.951686501 +0000 UTC m=+1045.992418848" watchObservedRunningTime="2026-02-25 11:35:31.977428774 +0000 UTC m=+1046.018161111" Feb 25 11:35:31 crc kubenswrapper[5005]: I0225 11:35:31.988005 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-1fc9-account-create-update-fhrgw" podStartSLOduration=1.987988085 podStartE2EDuration="1.987988085s" podCreationTimestamp="2026-02-25 11:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:31.982913761 +0000 UTC m=+1046.023646088" watchObservedRunningTime="2026-02-25 11:35:31.987988085 +0000 UTC m=+1046.028720412" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.000487 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-495c-account-create-update-zshp6" podStartSLOduration=3.000469255 podStartE2EDuration="3.000469255s" podCreationTimestamp="2026-02-25 11:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:31.99602052 +0000 UTC m=+1046.036752847" watchObservedRunningTime="2026-02-25 11:35:32.000469255 +0000 UTC m=+1046.041201582" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.281831 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.388596 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q525w\" (UniqueName: \"kubernetes.io/projected/0637913d-b2d2-4492-8aa2-eba57f5e7177-kube-api-access-q525w\") pod \"0637913d-b2d2-4492-8aa2-eba57f5e7177\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.389010 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0637913d-b2d2-4492-8aa2-eba57f5e7177-operator-scripts\") pod \"0637913d-b2d2-4492-8aa2-eba57f5e7177\" (UID: \"0637913d-b2d2-4492-8aa2-eba57f5e7177\") " Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.389857 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0637913d-b2d2-4492-8aa2-eba57f5e7177-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0637913d-b2d2-4492-8aa2-eba57f5e7177" (UID: "0637913d-b2d2-4492-8aa2-eba57f5e7177"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.394271 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0637913d-b2d2-4492-8aa2-eba57f5e7177-kube-api-access-q525w" (OuterVolumeSpecName: "kube-api-access-q525w") pod "0637913d-b2d2-4492-8aa2-eba57f5e7177" (UID: "0637913d-b2d2-4492-8aa2-eba57f5e7177"). InnerVolumeSpecName "kube-api-access-q525w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.491195 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q525w\" (UniqueName: \"kubernetes.io/projected/0637913d-b2d2-4492-8aa2-eba57f5e7177-kube-api-access-q525w\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.491226 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0637913d-b2d2-4492-8aa2-eba57f5e7177-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.735050 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.787025 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2mjrz"] Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.787304 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" containerName="dnsmasq-dns" containerID="cri-o://bfc9412e280796d64f8b1d1103466667529a68066dacbe185f3d37b22315e0e7" gracePeriod=10 Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.954249 5005 generic.go:334] "Generic (PLEG): container finished" podID="41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" containerID="f2ebb03f275271efadc78e5ea3c7d55611237aea67ad3882ea356517a1261b27" exitCode=0 Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.954422 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jt9t4" event={"ID":"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566","Type":"ContainerDied","Data":"f2ebb03f275271efadc78e5ea3c7d55611237aea67ad3882ea356517a1261b27"} Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.959413 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="6a46587d-8818-4e55-8351-4bb327e0010b" containerID="bfc9412e280796d64f8b1d1103466667529a68066dacbe185f3d37b22315e0e7" exitCode=0 Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.959491 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" event={"ID":"6a46587d-8818-4e55-8351-4bb327e0010b","Type":"ContainerDied","Data":"bfc9412e280796d64f8b1d1103466667529a68066dacbe185f3d37b22315e0e7"} Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.961116 5005 generic.go:334] "Generic (PLEG): container finished" podID="9c924c62-341b-43ea-af0b-ac567b5acfd0" containerID="f580a081e16413868fa6337e55d16bbe79ddf2c7f702cff7a44696c7d609c6ca" exitCode=0 Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.961173 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1fc9-account-create-update-fhrgw" event={"ID":"9c924c62-341b-43ea-af0b-ac567b5acfd0","Type":"ContainerDied","Data":"f580a081e16413868fa6337e55d16bbe79ddf2c7f702cff7a44696c7d609c6ca"} Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.963924 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kw2jf" event={"ID":"0637913d-b2d2-4492-8aa2-eba57f5e7177","Type":"ContainerDied","Data":"54b999a0bc7ad914cb7ef90c760d9ac2164cec477e6e278269deb1c29b8a1805"} Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.963943 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b999a0bc7ad914cb7ef90c760d9ac2164cec477e6e278269deb1c29b8a1805" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.963949 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kw2jf" Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.966289 5005 generic.go:334] "Generic (PLEG): container finished" podID="9bceed99-c54d-44a7-b7f0-85183b242006" containerID="b835a9d324d0aed163674299f0268232a5e454350e3023a27a40fbf0c129ebb6" exitCode=0 Feb 25 11:35:32 crc kubenswrapper[5005]: I0225 11:35:32.966340 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-495c-account-create-update-zshp6" event={"ID":"9bceed99-c54d-44a7-b7f0-85183b242006","Type":"ContainerDied","Data":"b835a9d324d0aed163674299f0268232a5e454350e3023a27a40fbf0c129ebb6"} Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.393333 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.409398 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.422222 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520008 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpp48\" (UniqueName: \"kubernetes.io/projected/14057729-fc6f-46d3-ba9f-606be9cb3e28-kube-api-access-mpp48\") pod \"14057729-fc6f-46d3-ba9f-606be9cb3e28\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520084 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-dns-svc\") pod \"6a46587d-8818-4e55-8351-4bb327e0010b\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520124 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d2lx\" (UniqueName: \"kubernetes.io/projected/6a46587d-8818-4e55-8351-4bb327e0010b-kube-api-access-6d2lx\") pod \"6a46587d-8818-4e55-8351-4bb327e0010b\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520160 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bd119-b8db-4ea6-92e1-efca3d60f766-operator-scripts\") pod \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520205 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14057729-fc6f-46d3-ba9f-606be9cb3e28-operator-scripts\") pod \"14057729-fc6f-46d3-ba9f-606be9cb3e28\" (UID: \"14057729-fc6f-46d3-ba9f-606be9cb3e28\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520227 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-bgpz6\" (UniqueName: \"kubernetes.io/projected/ef4bd119-b8db-4ea6-92e1-efca3d60f766-kube-api-access-bgpz6\") pod \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\" (UID: \"ef4bd119-b8db-4ea6-92e1-efca3d60f766\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520255 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-sb\") pod \"6a46587d-8818-4e55-8351-4bb327e0010b\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520274 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-nb\") pod \"6a46587d-8818-4e55-8351-4bb327e0010b\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.520293 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-config\") pod \"6a46587d-8818-4e55-8351-4bb327e0010b\" (UID: \"6a46587d-8818-4e55-8351-4bb327e0010b\") " Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.521749 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4bd119-b8db-4ea6-92e1-efca3d60f766-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef4bd119-b8db-4ea6-92e1-efca3d60f766" (UID: "ef4bd119-b8db-4ea6-92e1-efca3d60f766"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.522186 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14057729-fc6f-46d3-ba9f-606be9cb3e28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14057729-fc6f-46d3-ba9f-606be9cb3e28" (UID: "14057729-fc6f-46d3-ba9f-606be9cb3e28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.526962 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14057729-fc6f-46d3-ba9f-606be9cb3e28-kube-api-access-mpp48" (OuterVolumeSpecName: "kube-api-access-mpp48") pod "14057729-fc6f-46d3-ba9f-606be9cb3e28" (UID: "14057729-fc6f-46d3-ba9f-606be9cb3e28"). InnerVolumeSpecName "kube-api-access-mpp48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.527015 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4bd119-b8db-4ea6-92e1-efca3d60f766-kube-api-access-bgpz6" (OuterVolumeSpecName: "kube-api-access-bgpz6") pod "ef4bd119-b8db-4ea6-92e1-efca3d60f766" (UID: "ef4bd119-b8db-4ea6-92e1-efca3d60f766"). InnerVolumeSpecName "kube-api-access-bgpz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.543500 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a46587d-8818-4e55-8351-4bb327e0010b-kube-api-access-6d2lx" (OuterVolumeSpecName: "kube-api-access-6d2lx") pod "6a46587d-8818-4e55-8351-4bb327e0010b" (UID: "6a46587d-8818-4e55-8351-4bb327e0010b"). InnerVolumeSpecName "kube-api-access-6d2lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.559838 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a46587d-8818-4e55-8351-4bb327e0010b" (UID: "6a46587d-8818-4e55-8351-4bb327e0010b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.566595 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-config" (OuterVolumeSpecName: "config") pod "6a46587d-8818-4e55-8351-4bb327e0010b" (UID: "6a46587d-8818-4e55-8351-4bb327e0010b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.569093 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a46587d-8818-4e55-8351-4bb327e0010b" (UID: "6a46587d-8818-4e55-8351-4bb327e0010b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.570874 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a46587d-8818-4e55-8351-4bb327e0010b" (UID: "6a46587d-8818-4e55-8351-4bb327e0010b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622793 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpp48\" (UniqueName: \"kubernetes.io/projected/14057729-fc6f-46d3-ba9f-606be9cb3e28-kube-api-access-mpp48\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622831 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622840 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d2lx\" (UniqueName: \"kubernetes.io/projected/6a46587d-8818-4e55-8351-4bb327e0010b-kube-api-access-6d2lx\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622849 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bd119-b8db-4ea6-92e1-efca3d60f766-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622858 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14057729-fc6f-46d3-ba9f-606be9cb3e28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622866 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgpz6\" (UniqueName: \"kubernetes.io/projected/ef4bd119-b8db-4ea6-92e1-efca3d60f766-kube-api-access-bgpz6\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622874 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc 
kubenswrapper[5005]: I0225 11:35:33.622882 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.622891 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a46587d-8818-4e55-8351-4bb327e0010b-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.981929 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" event={"ID":"6a46587d-8818-4e55-8351-4bb327e0010b","Type":"ContainerDied","Data":"6ac681e56458fad36cd1c3fc728bf37d33faa8381c072418b36d2489d67e8200"} Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.981987 5005 scope.go:117] "RemoveContainer" containerID="bfc9412e280796d64f8b1d1103466667529a68066dacbe185f3d37b22315e0e7" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.982104 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-2mjrz" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.989841 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xv4rr" event={"ID":"14057729-fc6f-46d3-ba9f-606be9cb3e28","Type":"ContainerDied","Data":"266aa5b4962075250ebc2828496cba227fb34428c2df38e53a719a6fb6cd94e0"} Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.989881 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="266aa5b4962075250ebc2828496cba227fb34428c2df38e53a719a6fb6cd94e0" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.989902 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xv4rr" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.991346 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-605b-account-create-update-dpg5x" event={"ID":"ef4bd119-b8db-4ea6-92e1-efca3d60f766","Type":"ContainerDied","Data":"ffd18e45bb2064e9ca9472dbc7bd1f364fb4be3cd1341545fd08d0b561b909f4"} Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.991391 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd18e45bb2064e9ca9472dbc7bd1f364fb4be3cd1341545fd08d0b561b909f4" Feb 25 11:35:33 crc kubenswrapper[5005]: I0225 11:35:33.991541 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-605b-account-create-update-dpg5x" Feb 25 11:35:34 crc kubenswrapper[5005]: I0225 11:35:34.026837 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2mjrz"] Feb 25 11:35:34 crc kubenswrapper[5005]: I0225 11:35:34.031903 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-2mjrz"] Feb 25 11:35:34 crc kubenswrapper[5005]: I0225 11:35:34.701843 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" path="/var/lib/kubelet/pods/6a46587d-8818-4e55-8351-4bb327e0010b/volumes" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.531268 5005 scope.go:117] "RemoveContainer" containerID="366e8a79a0df83ee226891ebb4b2d003cd8a6eab3562a4550795fe6253ba4d31" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.766311 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.772116 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.800898 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.890353 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqpr\" (UniqueName: \"kubernetes.io/projected/9c924c62-341b-43ea-af0b-ac567b5acfd0-kube-api-access-rfqpr\") pod \"9c924c62-341b-43ea-af0b-ac567b5acfd0\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.890473 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bceed99-c54d-44a7-b7f0-85183b242006-operator-scripts\") pod \"9bceed99-c54d-44a7-b7f0-85183b242006\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.890529 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c924c62-341b-43ea-af0b-ac567b5acfd0-operator-scripts\") pod \"9c924c62-341b-43ea-af0b-ac567b5acfd0\" (UID: \"9c924c62-341b-43ea-af0b-ac567b5acfd0\") " Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.890597 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-operator-scripts\") pod \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\" (UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.890624 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xf4q\" (UniqueName: \"kubernetes.io/projected/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-kube-api-access-8xf4q\") pod 
\"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\" (UID: \"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566\") " Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.890668 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f67m\" (UniqueName: \"kubernetes.io/projected/9bceed99-c54d-44a7-b7f0-85183b242006-kube-api-access-5f67m\") pod \"9bceed99-c54d-44a7-b7f0-85183b242006\" (UID: \"9bceed99-c54d-44a7-b7f0-85183b242006\") " Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.891814 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c924c62-341b-43ea-af0b-ac567b5acfd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c924c62-341b-43ea-af0b-ac567b5acfd0" (UID: "9c924c62-341b-43ea-af0b-ac567b5acfd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.892117 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" (UID: "41d7b8d7-c3a9-4065-8dd8-5c01a37f6566"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.892204 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bceed99-c54d-44a7-b7f0-85183b242006-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bceed99-c54d-44a7-b7f0-85183b242006" (UID: "9bceed99-c54d-44a7-b7f0-85183b242006"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.894586 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bceed99-c54d-44a7-b7f0-85183b242006-kube-api-access-5f67m" (OuterVolumeSpecName: "kube-api-access-5f67m") pod "9bceed99-c54d-44a7-b7f0-85183b242006" (UID: "9bceed99-c54d-44a7-b7f0-85183b242006"). InnerVolumeSpecName "kube-api-access-5f67m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.896429 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c924c62-341b-43ea-af0b-ac567b5acfd0-kube-api-access-rfqpr" (OuterVolumeSpecName: "kube-api-access-rfqpr") pod "9c924c62-341b-43ea-af0b-ac567b5acfd0" (UID: "9c924c62-341b-43ea-af0b-ac567b5acfd0"). InnerVolumeSpecName "kube-api-access-rfqpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.896662 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-kube-api-access-8xf4q" (OuterVolumeSpecName: "kube-api-access-8xf4q") pod "41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" (UID: "41d7b8d7-c3a9-4065-8dd8-5c01a37f6566"). InnerVolumeSpecName "kube-api-access-8xf4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.992021 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqpr\" (UniqueName: \"kubernetes.io/projected/9c924c62-341b-43ea-af0b-ac567b5acfd0-kube-api-access-rfqpr\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.992227 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bceed99-c54d-44a7-b7f0-85183b242006-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.992280 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c924c62-341b-43ea-af0b-ac567b5acfd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.992335 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.992395 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xf4q\" (UniqueName: \"kubernetes.io/projected/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566-kube-api-access-8xf4q\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:36 crc kubenswrapper[5005]: I0225 11:35:36.992454 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f67m\" (UniqueName: \"kubernetes.io/projected/9bceed99-c54d-44a7-b7f0-85183b242006-kube-api-access-5f67m\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.027284 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-495c-account-create-update-zshp6" 
event={"ID":"9bceed99-c54d-44a7-b7f0-85183b242006","Type":"ContainerDied","Data":"254645115d99e3e92018e37dec80081be918a17158b03d0a4d74ba27576b5e1d"} Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.027636 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254645115d99e3e92018e37dec80081be918a17158b03d0a4d74ba27576b5e1d" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.027338 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-495c-account-create-update-zshp6" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.028970 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dp75q" event={"ID":"50d0cec3-9a4a-4252-accb-dc4194ad752e","Type":"ContainerStarted","Data":"ace342d97d3cc627380850762cfb0a51025f9c59c584d1d4c1afab89d5d5c0f1"} Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.031870 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jt9t4" event={"ID":"41d7b8d7-c3a9-4065-8dd8-5c01a37f6566","Type":"ContainerDied","Data":"a4e6eddc3cd1036707f0f1ed5bf6f90aee829e7c085a753314b7c4e5813ebec1"} Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.031931 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e6eddc3cd1036707f0f1ed5bf6f90aee829e7c085a753314b7c4e5813ebec1" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.032016 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jt9t4" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.039572 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1fc9-account-create-update-fhrgw" event={"ID":"9c924c62-341b-43ea-af0b-ac567b5acfd0","Type":"ContainerDied","Data":"020bfaa2f585033196b0ad23e231808a0526ff9ac45960d6c90aef6a4793723b"} Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.039613 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="020bfaa2f585033196b0ad23e231808a0526ff9ac45960d6c90aef6a4793723b" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.039682 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1fc9-account-create-update-fhrgw" Feb 25 11:35:37 crc kubenswrapper[5005]: I0225 11:35:37.058698 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dp75q" podStartSLOduration=2.957773526 podStartE2EDuration="8.058671768s" podCreationTimestamp="2026-02-25 11:35:29 +0000 UTC" firstStartedPulling="2026-02-25 11:35:31.522486666 +0000 UTC m=+1045.563218993" lastFinishedPulling="2026-02-25 11:35:36.623384918 +0000 UTC m=+1050.664117235" observedRunningTime="2026-02-25 11:35:37.054028208 +0000 UTC m=+1051.094760575" watchObservedRunningTime="2026-02-25 11:35:37.058671768 +0000 UTC m=+1051.099404095" Feb 25 11:35:40 crc kubenswrapper[5005]: I0225 11:35:40.067470 5005 generic.go:334] "Generic (PLEG): container finished" podID="50d0cec3-9a4a-4252-accb-dc4194ad752e" containerID="ace342d97d3cc627380850762cfb0a51025f9c59c584d1d4c1afab89d5d5c0f1" exitCode=0 Feb 25 11:35:40 crc kubenswrapper[5005]: I0225 11:35:40.067564 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dp75q" event={"ID":"50d0cec3-9a4a-4252-accb-dc4194ad752e","Type":"ContainerDied","Data":"ace342d97d3cc627380850762cfb0a51025f9c59c584d1d4c1afab89d5d5c0f1"} Feb 25 
11:35:41 crc kubenswrapper[5005]: I0225 11:35:41.941532 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.083624 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dp75q" event={"ID":"50d0cec3-9a4a-4252-accb-dc4194ad752e","Type":"ContainerDied","Data":"70330e78d33c1eab3de514445e49b4d11ed390249f1481edaf9908caf8bac739"} Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.083669 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70330e78d33c1eab3de514445e49b4d11ed390249f1481edaf9908caf8bac739" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.083704 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dp75q" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.111714 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbwzf\" (UniqueName: \"kubernetes.io/projected/50d0cec3-9a4a-4252-accb-dc4194ad752e-kube-api-access-qbwzf\") pod \"50d0cec3-9a4a-4252-accb-dc4194ad752e\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.111939 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-config-data\") pod \"50d0cec3-9a4a-4252-accb-dc4194ad752e\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.111987 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-combined-ca-bundle\") pod \"50d0cec3-9a4a-4252-accb-dc4194ad752e\" (UID: \"50d0cec3-9a4a-4252-accb-dc4194ad752e\") " Feb 25 11:35:42 crc kubenswrapper[5005]: 
I0225 11:35:42.139737 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d0cec3-9a4a-4252-accb-dc4194ad752e-kube-api-access-qbwzf" (OuterVolumeSpecName: "kube-api-access-qbwzf") pod "50d0cec3-9a4a-4252-accb-dc4194ad752e" (UID: "50d0cec3-9a4a-4252-accb-dc4194ad752e"). InnerVolumeSpecName "kube-api-access-qbwzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.147916 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50d0cec3-9a4a-4252-accb-dc4194ad752e" (UID: "50d0cec3-9a4a-4252-accb-dc4194ad752e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.167416 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-config-data" (OuterVolumeSpecName: "config-data") pod "50d0cec3-9a4a-4252-accb-dc4194ad752e" (UID: "50d0cec3-9a4a-4252-accb-dc4194ad752e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.214544 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.214667 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d0cec3-9a4a-4252-accb-dc4194ad752e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.214740 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbwzf\" (UniqueName: \"kubernetes.io/projected/50d0cec3-9a4a-4252-accb-dc4194ad752e-kube-api-access-qbwzf\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.386946 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mwr6c"] Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387697 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" containerName="init" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387710 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" containerName="init" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387730 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" containerName="dnsmasq-dns" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387735 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" containerName="dnsmasq-dns" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387744 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d0cec3-9a4a-4252-accb-dc4194ad752e" containerName="keystone-db-sync" Feb 
25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387751 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d0cec3-9a4a-4252-accb-dc4194ad752e" containerName="keystone-db-sync" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387762 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387768 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387776 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bceed99-c54d-44a7-b7f0-85183b242006" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387781 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bceed99-c54d-44a7-b7f0-85183b242006" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387788 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c924c62-341b-43ea-af0b-ac567b5acfd0" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387794 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c924c62-341b-43ea-af0b-ac567b5acfd0" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387802 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0637913d-b2d2-4492-8aa2-eba57f5e7177" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387807 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0637913d-b2d2-4492-8aa2-eba57f5e7177" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387818 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef4bd119-b8db-4ea6-92e1-efca3d60f766" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387823 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4bd119-b8db-4ea6-92e1-efca3d60f766" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: E0225 11:35:42.387831 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14057729-fc6f-46d3-ba9f-606be9cb3e28" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387837 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="14057729-fc6f-46d3-ba9f-606be9cb3e28" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387963 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4bd119-b8db-4ea6-92e1-efca3d60f766" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387973 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0637913d-b2d2-4492-8aa2-eba57f5e7177" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387982 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="14057729-fc6f-46d3-ba9f-606be9cb3e28" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387990 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bceed99-c54d-44a7-b7f0-85183b242006" containerName="mariadb-account-create-update" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.387999 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d0cec3-9a4a-4252-accb-dc4194ad752e" containerName="keystone-db-sync" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.388006 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c924c62-341b-43ea-af0b-ac567b5acfd0" containerName="mariadb-account-create-update" Feb 25 
11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.388012 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" containerName="mariadb-database-create" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.388021 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a46587d-8818-4e55-8351-4bb327e0010b" containerName="dnsmasq-dns" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.396466 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.404899 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.405188 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.405219 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nppj9" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.405308 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.405320 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.414807 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kkthh"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.416199 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.441044 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kkthh"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.456263 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mwr6c"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518114 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-config-data\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518167 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj46n\" (UniqueName: \"kubernetes.io/projected/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-kube-api-access-pj46n\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518203 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-config\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518221 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-credential-keys\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" 
Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518239 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518255 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-scripts\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518440 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518494 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-fernet-keys\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518578 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-combined-ca-bundle\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 
25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518641 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wtr\" (UniqueName: \"kubernetes.io/projected/55f88f1a-0a87-458e-8800-ebd780ced0cb-kube-api-access-l4wtr\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.518674 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-dns-svc\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.558804 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wngsg"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.559710 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.564742 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.564800 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.564972 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sv6bf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.571815 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ff575667-wqvzf"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.572971 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.576398 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.576689 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-z6njm" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.576803 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.579930 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.614289 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff575667-wqvzf"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.620210 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-config\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.620441 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-credential-keys\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.620500 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " 
pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.620540 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-scripts\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.620660 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.620707 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-fernet-keys\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.624408 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.625330 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.626514 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-config\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.627461 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-scripts\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.665745 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-combined-ca-bundle\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.665794 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4wtr\" (UniqueName: \"kubernetes.io/projected/55f88f1a-0a87-458e-8800-ebd780ced0cb-kube-api-access-l4wtr\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.665818 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-dns-svc\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.665872 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-config-data\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.665914 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj46n\" (UniqueName: \"kubernetes.io/projected/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-kube-api-access-pj46n\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.667098 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-dns-svc\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.670825 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-credential-keys\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.675623 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wngsg"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.676928 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-fernet-keys\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.684476 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-config-data\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.684602 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-combined-ca-bundle\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.695692 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj46n\" (UniqueName: \"kubernetes.io/projected/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-kube-api-access-pj46n\") pod \"dnsmasq-dns-6546db6db7-kkthh\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.698214 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qrzvf"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.699281 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qrzvf"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.699361 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.702455 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4wtr\" (UniqueName: \"kubernetes.io/projected/55f88f1a-0a87-458e-8800-ebd780ced0cb-kube-api-access-l4wtr\") pod \"keystone-bootstrap-mwr6c\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.705791 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.705866 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8c5wp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.705920 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.735415 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.737385 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.738754 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.742546 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.752989 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.753021 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.757090 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769448 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9058179d-2c7b-4d0b-b162-10141ea754b0-horizon-secret-key\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769491 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-config-data\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769518 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-combined-ca-bundle\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769536 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-config\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769616 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-scripts\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769650 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmf79\" (UniqueName: \"kubernetes.io/projected/9058179d-2c7b-4d0b-b162-10141ea754b0-kube-api-access-jmf79\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769670 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9058179d-2c7b-4d0b-b162-10141ea754b0-logs\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.769686 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7lh\" (UniqueName: \"kubernetes.io/projected/b06f7ac8-89c3-4886-9cd8-58353d24e476-kube-api-access-ql7lh\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.777485 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6cr5h"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.778569 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.790944 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9tbgz" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.791279 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.810800 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kkthh"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.850488 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wqnnp"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.851431 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.866088 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wqnnp"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.868037 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.868178 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sz5nt" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.868280 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872506 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-config-data\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: 
I0225 11:35:42.872550 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa77b98-833c-4278-b615-49e4c28e69c5-etc-machine-id\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872575 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-scripts\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872600 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-scripts\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872621 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-config-data\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872638 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-scripts\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872670 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jmf79\" (UniqueName: \"kubernetes.io/projected/9058179d-2c7b-4d0b-b162-10141ea754b0-kube-api-access-jmf79\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872691 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9058179d-2c7b-4d0b-b162-10141ea754b0-logs\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872709 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql7lh\" (UniqueName: \"kubernetes.io/projected/b06f7ac8-89c3-4886-9cd8-58353d24e476-kube-api-access-ql7lh\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872728 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppfk\" (UniqueName: \"kubernetes.io/projected/5a68e341-40f2-4b1b-b92f-bca9a4946b39-kube-api-access-2ppfk\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872744 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlv2\" (UniqueName: \"kubernetes.io/projected/1fa77b98-833c-4278-b615-49e4c28e69c5-kube-api-access-rzlv2\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872761 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-scripts\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872791 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-config-data\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872806 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872821 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-run-httpd\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872838 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-combined-ca-bundle\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872857 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9058179d-2c7b-4d0b-b162-10141ea754b0-horizon-secret-key\") pod 
\"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872874 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-config-data\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872893 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-combined-ca-bundle\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872911 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-config\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872934 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-log-httpd\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872955 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-db-sync-config-data\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 
11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872971 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rjp\" (UniqueName: \"kubernetes.io/projected/7b033028-9ac7-43d8-95da-931e7a25249a-kube-api-access-68rjp\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.872987 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a68e341-40f2-4b1b-b92f-bca9a4946b39-logs\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.873016 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-combined-ca-bundle\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.873030 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.873691 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-scripts\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.873990 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9058179d-2c7b-4d0b-b162-10141ea754b0-logs\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.875072 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-config-data\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.881289 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-combined-ca-bundle\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.882954 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9058179d-2c7b-4d0b-b162-10141ea754b0-horizon-secret-key\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.893613 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-config\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.901434 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6cr5h"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.919225 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7987f74bbc-vgsgg"] Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.920987 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.943649 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql7lh\" (UniqueName: \"kubernetes.io/projected/b06f7ac8-89c3-4886-9cd8-58353d24e476-kube-api-access-ql7lh\") pod \"neutron-db-sync-wngsg\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") " pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.949706 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmf79\" (UniqueName: \"kubernetes.io/projected/9058179d-2c7b-4d0b-b162-10141ea754b0-kube-api-access-jmf79\") pod \"horizon-7ff575667-wqvzf\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979639 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ppfk\" (UniqueName: \"kubernetes.io/projected/5a68e341-40f2-4b1b-b92f-bca9a4946b39-kube-api-access-2ppfk\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979683 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k8vh\" (UniqueName: \"kubernetes.io/projected/3052cba9-7666-438d-a17c-d3028c836c1d-kube-api-access-4k8vh\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979705 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlv2\" (UniqueName: 
\"kubernetes.io/projected/1fa77b98-833c-4278-b615-49e4c28e69c5-kube-api-access-rzlv2\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979723 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-scripts\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979746 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-db-sync-config-data\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979764 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979846 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-config-data\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979862 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979875 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-run-httpd\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979897 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-combined-ca-bundle\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979921 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979939 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.979963 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-log-httpd\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc 
kubenswrapper[5005]: I0225 11:35:42.979982 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-config\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980001 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rjp\" (UniqueName: \"kubernetes.io/projected/7b033028-9ac7-43d8-95da-931e7a25249a-kube-api-access-68rjp\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980016 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-db-sync-config-data\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980033 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a68e341-40f2-4b1b-b92f-bca9a4946b39-logs\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980057 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-combined-ca-bundle\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980072 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980090 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-combined-ca-bundle\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980108 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-config-data\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980127 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa77b98-833c-4278-b615-49e4c28e69c5-etc-machine-id\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980148 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rxx\" (UniqueName: \"kubernetes.io/projected/ec247cb2-074d-4c74-96fd-62353e78b898-kube-api-access-p9rxx\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980169 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-scripts\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980186 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-config-data\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.980203 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-scripts\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.988405 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a68e341-40f2-4b1b-b92f-bca9a4946b39-logs\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.988485 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa77b98-833c-4278-b615-49e4c28e69c5-etc-machine-id\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.988649 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-log-httpd\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 
11:35:42.991112 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-scripts\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.993808 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-run-httpd\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:42 crc kubenswrapper[5005]: I0225 11:35:42.993882 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.000800 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-combined-ca-bundle\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.000847 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-scripts\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.001230 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-scripts\") pod \"ceilometer-0\" (UID: 
\"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.001408 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.001759 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-config-data\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.004644 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-config-data\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.005764 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-config-data\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.006567 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-db-sync-config-data\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.007896 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-combined-ca-bundle\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.008782 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vgsgg"] Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.009345 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlv2\" (UniqueName: \"kubernetes.io/projected/1fa77b98-833c-4278-b615-49e4c28e69c5-kube-api-access-rzlv2\") pod \"cinder-db-sync-qrzvf\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.028519 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ppfk\" (UniqueName: \"kubernetes.io/projected/5a68e341-40f2-4b1b-b92f-bca9a4946b39-kube-api-access-2ppfk\") pod \"placement-db-sync-wqnnp\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.028623 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rjp\" (UniqueName: \"kubernetes.io/projected/7b033028-9ac7-43d8-95da-931e7a25249a-kube-api-access-68rjp\") pod \"ceilometer-0\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " pod="openstack/ceilometer-0" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.046054 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.076850 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.078225 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wqnnp" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.081740 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d4b89f889-cxtxj"] Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083346 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k8vh\" (UniqueName: \"kubernetes.io/projected/3052cba9-7666-438d-a17c-d3028c836c1d-kube-api-access-4k8vh\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083406 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-db-sync-config-data\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083425 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083465 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083485 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083512 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-config\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083545 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-combined-ca-bundle\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083571 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9rxx\" (UniqueName: \"kubernetes.io/projected/ec247cb2-074d-4c74-96fd-62353e78b898-kube-api-access-p9rxx\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.083652 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.086118 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.090257 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.090642 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-combined-ca-bundle\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.091088 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.091301 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-config\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.092859 
5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-db-sync-config-data\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.135111 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9rxx\" (UniqueName: \"kubernetes.io/projected/ec247cb2-074d-4c74-96fd-62353e78b898-kube-api-access-p9rxx\") pod \"dnsmasq-dns-7987f74bbc-vgsgg\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.146533 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k8vh\" (UniqueName: \"kubernetes.io/projected/3052cba9-7666-438d-a17c-d3028c836c1d-kube-api-access-4k8vh\") pod \"barbican-db-sync-6cr5h\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.185244 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-logs\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.185336 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-config-data\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.185411 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-horizon-secret-key\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.185431 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ktbv\" (UniqueName: \"kubernetes.io/projected/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-kube-api-access-9ktbv\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.185478 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-scripts\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.199778 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wngsg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.201158 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d4b89f889-cxtxj"] Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.229245 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.281579 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mwr6c"] Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.290515 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-scripts\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.290650 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-logs\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.290700 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-config-data\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.290754 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-horizon-secret-key\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.290789 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ktbv\" (UniqueName: \"kubernetes.io/projected/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-kube-api-access-9ktbv\") pod 
\"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.291195 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-scripts\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.291731 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-logs\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.293931 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-config-data\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.303344 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-horizon-secret-key\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.326671 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ktbv\" (UniqueName: \"kubernetes.io/projected/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-kube-api-access-9ktbv\") pod \"horizon-d4b89f889-cxtxj\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 
11:35:43.415159 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.415181 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.458119 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.508490 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kkthh"] Feb 25 11:35:43 crc kubenswrapper[5005]: W0225 11:35:43.586838 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9271ee2f_d4fa_48af_a5bb_80384bbf60dc.slice/crio-eafd3a9fc663c6a5076a7178cdf838e3d790c2abfb4960435728b7a3d572398a WatchSource:0}: Error finding container eafd3a9fc663c6a5076a7178cdf838e3d790c2abfb4960435728b7a3d572398a: Status 404 returned error can't find the container with id eafd3a9fc663c6a5076a7178cdf838e3d790c2abfb4960435728b7a3d572398a Feb 25 11:35:43 crc kubenswrapper[5005]: I0225 11:35:43.734568 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wqnnp"] Feb 25 11:35:43 crc kubenswrapper[5005]: W0225 11:35:43.997248 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa77b98_833c_4278_b615_49e4c28e69c5.slice/crio-a230d9c90dbfff681ec33f47f3b91d28ef124b70c065ac16bcddae1e24a25e96 WatchSource:0}: Error finding container a230d9c90dbfff681ec33f47f3b91d28ef124b70c065ac16bcddae1e24a25e96: Status 404 returned error can't find the container with id a230d9c90dbfff681ec33f47f3b91d28ef124b70c065ac16bcddae1e24a25e96 Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.003518 5005 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.019000 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wngsg"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.030617 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qrzvf"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.125503 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wqnnp" event={"ID":"5a68e341-40f2-4b1b-b92f-bca9a4946b39","Type":"ContainerStarted","Data":"ba12959e3447ca77627118765a66de9640473945a13a6ee2487b7312b04d8226"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.132063 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qrzvf" event={"ID":"1fa77b98-833c-4278-b615-49e4c28e69c5","Type":"ContainerStarted","Data":"a230d9c90dbfff681ec33f47f3b91d28ef124b70c065ac16bcddae1e24a25e96"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.138893 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff575667-wqvzf"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.139234 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerStarted","Data":"d34e179fc10524e2df3a93523d86e2098350193a6f0bfd4f838b784475518238"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.142576 5005 generic.go:334] "Generic (PLEG): container finished" podID="9271ee2f-d4fa-48af-a5bb-80384bbf60dc" containerID="a6ef5bfd69112929ffb979c384e0b497358d72ae927b7d16315aaff1a1a49bb4" exitCode=0 Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.142782 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" 
event={"ID":"9271ee2f-d4fa-48af-a5bb-80384bbf60dc","Type":"ContainerDied","Data":"a6ef5bfd69112929ffb979c384e0b497358d72ae927b7d16315aaff1a1a49bb4"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.142820 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" event={"ID":"9271ee2f-d4fa-48af-a5bb-80384bbf60dc","Type":"ContainerStarted","Data":"eafd3a9fc663c6a5076a7178cdf838e3d790c2abfb4960435728b7a3d572398a"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.147837 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wngsg" event={"ID":"b06f7ac8-89c3-4886-9cd8-58353d24e476","Type":"ContainerStarted","Data":"d8d3491a19fa29f50e593c46d8237fe01cc958c7309b457ab2140680ded9ff52"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.155217 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwr6c" event={"ID":"55f88f1a-0a87-458e-8800-ebd780ced0cb","Type":"ContainerStarted","Data":"dedcdd95d0fcaae436f2c94355a36ad69612b37b90325fbca4b37c8544993638"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.155259 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwr6c" event={"ID":"55f88f1a-0a87-458e-8800-ebd780ced0cb","Type":"ContainerStarted","Data":"3710a941cc7127c28fd9727d95cd3c9930e3c3531c22e68e5a387195aa71709e"} Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.181823 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mwr6c" podStartSLOduration=2.181806874 podStartE2EDuration="2.181806874s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:44.179889775 +0000 UTC m=+1058.220622102" watchObservedRunningTime="2026-02-25 11:35:44.181806874 +0000 UTC m=+1058.222539191" Feb 25 11:35:44 crc 
kubenswrapper[5005]: I0225 11:35:44.237034 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d4b89f889-cxtxj"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.251445 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vgsgg"] Feb 25 11:35:44 crc kubenswrapper[5005]: W0225 11:35:44.252652 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec247cb2_074d_4c74_96fd_62353e78b898.slice/crio-14e7384adc7be20ad901dd1e55d343344b93d7a089aea9c37173950be8a28f7e WatchSource:0}: Error finding container 14e7384adc7be20ad901dd1e55d343344b93d7a089aea9c37173950be8a28f7e: Status 404 returned error can't find the container with id 14e7384adc7be20ad901dd1e55d343344b93d7a089aea9c37173950be8a28f7e Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.264565 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6cr5h"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.434540 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.548460 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.577524 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj46n\" (UniqueName: \"kubernetes.io/projected/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-kube-api-access-pj46n\") pod \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.577569 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-sb\") pod \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.577607 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-nb\") pod \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.577633 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-config\") pod \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.577690 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-dns-svc\") pod \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\" (UID: \"9271ee2f-d4fa-48af-a5bb-80384bbf60dc\") " Feb 25 11:35:44 crc 
kubenswrapper[5005]: I0225 11:35:44.583502 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ff575667-wqvzf"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.597858 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-kube-api-access-pj46n" (OuterVolumeSpecName: "kube-api-access-pj46n") pod "9271ee2f-d4fa-48af-a5bb-80384bbf60dc" (UID: "9271ee2f-d4fa-48af-a5bb-80384bbf60dc"). InnerVolumeSpecName "kube-api-access-pj46n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.611970 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bc6f57c7-g7rsc"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.612254 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9271ee2f-d4fa-48af-a5bb-80384bbf60dc" (UID: "9271ee2f-d4fa-48af-a5bb-80384bbf60dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:44 crc kubenswrapper[5005]: E0225 11:35:44.612348 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9271ee2f-d4fa-48af-a5bb-80384bbf60dc" containerName="init" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.612380 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9271ee2f-d4fa-48af-a5bb-80384bbf60dc" containerName="init" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.612530 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9271ee2f-d4fa-48af-a5bb-80384bbf60dc" containerName="init" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.613318 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.632475 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9271ee2f-d4fa-48af-a5bb-80384bbf60dc" (UID: "9271ee2f-d4fa-48af-a5bb-80384bbf60dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.639333 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bc6f57c7-g7rsc"] Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.664925 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9271ee2f-d4fa-48af-a5bb-80384bbf60dc" (UID: "9271ee2f-d4fa-48af-a5bb-80384bbf60dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.666532 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-config" (OuterVolumeSpecName: "config") pod "9271ee2f-d4fa-48af-a5bb-80384bbf60dc" (UID: "9271ee2f-d4fa-48af-a5bb-80384bbf60dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679527 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk7c\" (UniqueName: \"kubernetes.io/projected/4869a7a1-945a-47d9-9239-b1537f04be41-kube-api-access-2fk7c\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679655 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-scripts\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679689 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4869a7a1-945a-47d9-9239-b1537f04be41-horizon-secret-key\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679731 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-config-data\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679753 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4869a7a1-945a-47d9-9239-b1537f04be41-logs\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " 
pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679882 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679904 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj46n\" (UniqueName: \"kubernetes.io/projected/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-kube-api-access-pj46n\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679920 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679931 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.679942 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9271ee2f-d4fa-48af-a5bb-80384bbf60dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.781153 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk7c\" (UniqueName: \"kubernetes.io/projected/4869a7a1-945a-47d9-9239-b1537f04be41-kube-api-access-2fk7c\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.781243 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-scripts\") 
pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.781272 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4869a7a1-945a-47d9-9239-b1537f04be41-horizon-secret-key\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.781306 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-config-data\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.781328 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4869a7a1-945a-47d9-9239-b1537f04be41-logs\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.782049 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-scripts\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.782490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4869a7a1-945a-47d9-9239-b1537f04be41-logs\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 
11:35:44.782956 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-config-data\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.784768 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4869a7a1-945a-47d9-9239-b1537f04be41-horizon-secret-key\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.798617 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk7c\" (UniqueName: \"kubernetes.io/projected/4869a7a1-945a-47d9-9239-b1537f04be41-kube-api-access-2fk7c\") pod \"horizon-7bc6f57c7-g7rsc\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:44 crc kubenswrapper[5005]: I0225 11:35:44.940138 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.181552 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d4b89f889-cxtxj" event={"ID":"a38c83b4-5b4f-4b02-b427-ff357cbbb47c","Type":"ContainerStarted","Data":"f3f47dc787ba9ff9b7d0e4d226b7af3ed6613f8e1a143348c16c68fd6624620e"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.203365 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" event={"ID":"9271ee2f-d4fa-48af-a5bb-80384bbf60dc","Type":"ContainerDied","Data":"eafd3a9fc663c6a5076a7178cdf838e3d790c2abfb4960435728b7a3d572398a"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.203457 5005 scope.go:117] "RemoveContainer" containerID="a6ef5bfd69112929ffb979c384e0b497358d72ae927b7d16315aaff1a1a49bb4" Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.203584 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kkthh" Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.211531 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wngsg" event={"ID":"b06f7ac8-89c3-4886-9cd8-58353d24e476","Type":"ContainerStarted","Data":"2f1fd2b4d6c287f940d037a15029e5fa6f07d763296be423d3346c04527a84c6"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.242787 5005 generic.go:334] "Generic (PLEG): container finished" podID="ec247cb2-074d-4c74-96fd-62353e78b898" containerID="aa7a9d8fd143595a707742ac934f2cc7a54f4c4ae8c5b8a2d61db4fa12497721" exitCode=0 Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.242884 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" event={"ID":"ec247cb2-074d-4c74-96fd-62353e78b898","Type":"ContainerDied","Data":"aa7a9d8fd143595a707742ac934f2cc7a54f4c4ae8c5b8a2d61db4fa12497721"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.242916 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" event={"ID":"ec247cb2-074d-4c74-96fd-62353e78b898","Type":"ContainerStarted","Data":"14e7384adc7be20ad901dd1e55d343344b93d7a089aea9c37173950be8a28f7e"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.265647 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6cr5h" event={"ID":"3052cba9-7666-438d-a17c-d3028c836c1d","Type":"ContainerStarted","Data":"c20535fdbb55684460420590cc6bd016c71b21c037bdd52527114a00d37621d2"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.274839 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff575667-wqvzf" event={"ID":"9058179d-2c7b-4d0b-b162-10141ea754b0","Type":"ContainerStarted","Data":"965dbe4f972a93df810b74d2a00ab4be969aba708db62bfed4ffaeb10cb32304"} Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.384910 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kkthh"] Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.460198 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kkthh"] Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.489631 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wngsg" podStartSLOduration=3.489611686 podStartE2EDuration="3.489611686s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:45.384799427 +0000 UTC m=+1059.425531754" watchObservedRunningTime="2026-02-25 11:35:45.489611686 +0000 UTC m=+1059.530344013" Feb 25 11:35:45 crc kubenswrapper[5005]: I0225 11:35:45.531148 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bc6f57c7-g7rsc"] Feb 25 11:35:45 crc kubenswrapper[5005]: W0225 11:35:45.804133 5005 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4869a7a1_945a_47d9_9239_b1537f04be41.slice/crio-b05ebe99848f4cf675a24f3c1817f8ba628960f28a60561e56b5ec94fc2c4bcb WatchSource:0}: Error finding container b05ebe99848f4cf675a24f3c1817f8ba628960f28a60561e56b5ec94fc2c4bcb: Status 404 returned error can't find the container with id b05ebe99848f4cf675a24f3c1817f8ba628960f28a60561e56b5ec94fc2c4bcb Feb 25 11:35:46 crc kubenswrapper[5005]: I0225 11:35:46.286125 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc6f57c7-g7rsc" event={"ID":"4869a7a1-945a-47d9-9239-b1537f04be41","Type":"ContainerStarted","Data":"b05ebe99848f4cf675a24f3c1817f8ba628960f28a60561e56b5ec94fc2c4bcb"} Feb 25 11:35:46 crc kubenswrapper[5005]: I0225 11:35:46.698240 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9271ee2f-d4fa-48af-a5bb-80384bbf60dc" path="/var/lib/kubelet/pods/9271ee2f-d4fa-48af-a5bb-80384bbf60dc/volumes" Feb 25 11:35:47 crc kubenswrapper[5005]: I0225 11:35:47.297159 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" event={"ID":"ec247cb2-074d-4c74-96fd-62353e78b898","Type":"ContainerStarted","Data":"c04c4388259f62c1d6f77ee3160313648fc0753c4b24d2afe7cb0263ca31d349"} Feb 25 11:35:47 crc kubenswrapper[5005]: I0225 11:35:47.297516 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:35:47 crc kubenswrapper[5005]: I0225 11:35:47.318741 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" podStartSLOduration=5.318724415 podStartE2EDuration="5.318724415s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:35:47.311783924 +0000 UTC 
m=+1061.352516251" watchObservedRunningTime="2026-02-25 11:35:47.318724415 +0000 UTC m=+1061.359456742" Feb 25 11:35:48 crc kubenswrapper[5005]: I0225 11:35:48.307461 5005 generic.go:334] "Generic (PLEG): container finished" podID="55f88f1a-0a87-458e-8800-ebd780ced0cb" containerID="dedcdd95d0fcaae436f2c94355a36ad69612b37b90325fbca4b37c8544993638" exitCode=0 Feb 25 11:35:48 crc kubenswrapper[5005]: I0225 11:35:48.308158 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwr6c" event={"ID":"55f88f1a-0a87-458e-8800-ebd780ced0cb","Type":"ContainerDied","Data":"dedcdd95d0fcaae436f2c94355a36ad69612b37b90325fbca4b37c8544993638"} Feb 25 11:35:49 crc kubenswrapper[5005]: I0225 11:35:49.875058 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.024451 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-fernet-keys\") pod \"55f88f1a-0a87-458e-8800-ebd780ced0cb\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.024605 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-config-data\") pod \"55f88f1a-0a87-458e-8800-ebd780ced0cb\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.024663 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4wtr\" (UniqueName: \"kubernetes.io/projected/55f88f1a-0a87-458e-8800-ebd780ced0cb-kube-api-access-l4wtr\") pod \"55f88f1a-0a87-458e-8800-ebd780ced0cb\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.024688 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-scripts\") pod \"55f88f1a-0a87-458e-8800-ebd780ced0cb\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.024713 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-credential-keys\") pod \"55f88f1a-0a87-458e-8800-ebd780ced0cb\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.024746 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-combined-ca-bundle\") pod \"55f88f1a-0a87-458e-8800-ebd780ced0cb\" (UID: \"55f88f1a-0a87-458e-8800-ebd780ced0cb\") " Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.031257 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "55f88f1a-0a87-458e-8800-ebd780ced0cb" (UID: "55f88f1a-0a87-458e-8800-ebd780ced0cb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.031561 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "55f88f1a-0a87-458e-8800-ebd780ced0cb" (UID: "55f88f1a-0a87-458e-8800-ebd780ced0cb"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.042387 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-scripts" (OuterVolumeSpecName: "scripts") pod "55f88f1a-0a87-458e-8800-ebd780ced0cb" (UID: "55f88f1a-0a87-458e-8800-ebd780ced0cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.042403 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f88f1a-0a87-458e-8800-ebd780ced0cb-kube-api-access-l4wtr" (OuterVolumeSpecName: "kube-api-access-l4wtr") pod "55f88f1a-0a87-458e-8800-ebd780ced0cb" (UID: "55f88f1a-0a87-458e-8800-ebd780ced0cb"). InnerVolumeSpecName "kube-api-access-l4wtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.060027 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-config-data" (OuterVolumeSpecName: "config-data") pod "55f88f1a-0a87-458e-8800-ebd780ced0cb" (UID: "55f88f1a-0a87-458e-8800-ebd780ced0cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.064751 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55f88f1a-0a87-458e-8800-ebd780ced0cb" (UID: "55f88f1a-0a87-458e-8800-ebd780ced0cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.126950 5005 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.127263 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.127274 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4wtr\" (UniqueName: \"kubernetes.io/projected/55f88f1a-0a87-458e-8800-ebd780ced0cb-kube-api-access-l4wtr\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.127283 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.127293 5005 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.127301 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f88f1a-0a87-458e-8800-ebd780ced0cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.338532 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mwr6c" event={"ID":"55f88f1a-0a87-458e-8800-ebd780ced0cb","Type":"ContainerDied","Data":"3710a941cc7127c28fd9727d95cd3c9930e3c3531c22e68e5a387195aa71709e"} Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 
11:35:50.338569 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3710a941cc7127c28fd9727d95cd3c9930e3c3531c22e68e5a387195aa71709e" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.338607 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mwr6c" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.438418 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mwr6c"] Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.444722 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mwr6c"] Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.552885 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hpfk4"] Feb 25 11:35:50 crc kubenswrapper[5005]: E0225 11:35:50.553202 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f88f1a-0a87-458e-8800-ebd780ced0cb" containerName="keystone-bootstrap" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.553218 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f88f1a-0a87-458e-8800-ebd780ced0cb" containerName="keystone-bootstrap" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.553393 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f88f1a-0a87-458e-8800-ebd780ced0cb" containerName="keystone-bootstrap" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.553945 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.555751 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.556066 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.556263 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.556346 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.556881 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nppj9" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.572941 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hpfk4"] Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.640819 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz2sk\" (UniqueName: \"kubernetes.io/projected/753b53c4-aec7-4a64-bc02-76520bddb879-kube-api-access-mz2sk\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.640865 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-credential-keys\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.640890 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-combined-ca-bundle\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.641000 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-fernet-keys\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.641076 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-config-data\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.641133 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-scripts\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.703808 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f88f1a-0a87-458e-8800-ebd780ced0cb" path="/var/lib/kubelet/pods/55f88f1a-0a87-458e-8800-ebd780ced0cb/volumes" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.743663 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-scripts\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " 
pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.743711 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz2sk\" (UniqueName: \"kubernetes.io/projected/753b53c4-aec7-4a64-bc02-76520bddb879-kube-api-access-mz2sk\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.743733 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-credential-keys\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.743759 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-combined-ca-bundle\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.744440 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-fernet-keys\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.744495 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-config-data\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 
11:35:50.755758 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-config-data\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.756099 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-scripts\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.756311 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-credential-keys\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.758075 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-combined-ca-bundle\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.771202 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-fernet-keys\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.781838 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz2sk\" (UniqueName: \"kubernetes.io/projected/753b53c4-aec7-4a64-bc02-76520bddb879-kube-api-access-mz2sk\") pod \"keystone-bootstrap-hpfk4\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:50 crc kubenswrapper[5005]: I0225 11:35:50.868548 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hpfk4"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.183579 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d4b89f889-cxtxj"]
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.216129 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f46657c4d-qmj7f"]
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.217362 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.218911 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.237663 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f46657c4d-qmj7f"]
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.342922 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bc6f57c7-g7rsc"]
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.355668 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-combined-ca-bundle\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.355736 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8kgw\" (UniqueName: \"kubernetes.io/projected/f37d127e-e5d1-45f3-8e44-262ab354e0c2-kube-api-access-k8kgw\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.355791 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-config-data\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.355817 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-tls-certs\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.355842 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-scripts\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.355995 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-secret-key\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.356077 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f37d127e-e5d1-45f3-8e44-262ab354e0c2-logs\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.372109 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f8bbcbf96-lg5q8"]
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.373690 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.386680 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f8bbcbf96-lg5q8"]
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.460952 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277143e-cdb1-4dda-976d-f06c58c14c33-config-data\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461012 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-combined-ca-bundle\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461071 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-combined-ca-bundle\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461187 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8kgw\" (UniqueName: \"kubernetes.io/projected/f37d127e-e5d1-45f3-8e44-262ab354e0c2-kube-api-access-k8kgw\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461220 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-config-data\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461259 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-tls-certs\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461277 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvjxn\" (UniqueName: \"kubernetes.io/projected/e277143e-cdb1-4dda-976d-f06c58c14c33-kube-api-access-nvjxn\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461316 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-scripts\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461341 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277143e-cdb1-4dda-976d-f06c58c14c33-scripts\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461432 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-secret-key\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461461 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-horizon-secret-key\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461502 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f37d127e-e5d1-45f3-8e44-262ab354e0c2-logs\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461528 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-horizon-tls-certs\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.461551 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277143e-cdb1-4dda-976d-f06c58c14c33-logs\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.462622 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f37d127e-e5d1-45f3-8e44-262ab354e0c2-logs\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.463917 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-config-data\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.464943 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-scripts\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.466420 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-secret-key\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.466884 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-tls-certs\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.478098 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-combined-ca-bundle\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.484657 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8kgw\" (UniqueName: \"kubernetes.io/projected/f37d127e-e5d1-45f3-8e44-262ab354e0c2-kube-api-access-k8kgw\") pod \"horizon-7f46657c4d-qmj7f\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") " pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.541151 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562557 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277143e-cdb1-4dda-976d-f06c58c14c33-logs\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562642 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277143e-cdb1-4dda-976d-f06c58c14c33-config-data\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562673 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-combined-ca-bundle\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562757 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvjxn\" (UniqueName: \"kubernetes.io/projected/e277143e-cdb1-4dda-976d-f06c58c14c33-kube-api-access-nvjxn\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562794 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277143e-cdb1-4dda-976d-f06c58c14c33-scripts\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562845 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-horizon-secret-key\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.562887 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-horizon-tls-certs\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.563866 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e277143e-cdb1-4dda-976d-f06c58c14c33-logs\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.564487 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e277143e-cdb1-4dda-976d-f06c58c14c33-scripts\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.564883 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e277143e-cdb1-4dda-976d-f06c58c14c33-config-data\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.565894 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-horizon-tls-certs\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.573870 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-horizon-secret-key\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.574269 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e277143e-cdb1-4dda-976d-f06c58c14c33-combined-ca-bundle\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.581845 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvjxn\" (UniqueName: \"kubernetes.io/projected/e277143e-cdb1-4dda-976d-f06c58c14c33-kube-api-access-nvjxn\") pod \"horizon-f8bbcbf96-lg5q8\" (UID: \"e277143e-cdb1-4dda-976d-f06c58c14c33\") " pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:51 crc kubenswrapper[5005]: I0225 11:35:51.688657 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:35:53 crc kubenswrapper[5005]: I0225 11:35:53.417225 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg"
Feb 25 11:35:53 crc kubenswrapper[5005]: I0225 11:35:53.484840 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-shqxt"]
Feb 25 11:35:53 crc kubenswrapper[5005]: I0225 11:35:53.485059 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" containerID="cri-o://2085434377bad043b70e66957f75af5fd0c473326a82f50f8848ccc9c8dba41e" gracePeriod=10
Feb 25 11:35:54 crc kubenswrapper[5005]: I0225 11:35:54.382667 5005 generic.go:334] "Generic (PLEG): container finished" podID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerID="2085434377bad043b70e66957f75af5fd0c473326a82f50f8848ccc9c8dba41e" exitCode=0
Feb 25 11:35:54 crc kubenswrapper[5005]: I0225 11:35:54.382708 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" event={"ID":"7e5145ce-efae-4a28-9f07-d2922d2682bb","Type":"ContainerDied","Data":"2085434377bad043b70e66957f75af5fd0c473326a82f50f8848ccc9c8dba41e"}
Feb 25 11:35:57 crc kubenswrapper[5005]: I0225 11:35:57.726421 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.129559 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533656-kxl9w"]
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.132250 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533656-kxl9w"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.134027 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.134165 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.134217 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.138938 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533656-kxl9w"]
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.236150 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlng\" (UniqueName: \"kubernetes.io/projected/83478c50-2691-45b5-abc6-039a23bb1645-kube-api-access-gqlng\") pod \"auto-csr-approver-29533656-kxl9w\" (UID: \"83478c50-2691-45b5-abc6-039a23bb1645\") " pod="openshift-infra/auto-csr-approver-29533656-kxl9w"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.337848 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqlng\" (UniqueName: \"kubernetes.io/projected/83478c50-2691-45b5-abc6-039a23bb1645-kube-api-access-gqlng\") pod \"auto-csr-approver-29533656-kxl9w\" (UID: \"83478c50-2691-45b5-abc6-039a23bb1645\") " pod="openshift-infra/auto-csr-approver-29533656-kxl9w"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.356942 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlng\" (UniqueName: \"kubernetes.io/projected/83478c50-2691-45b5-abc6-039a23bb1645-kube-api-access-gqlng\") pod \"auto-csr-approver-29533656-kxl9w\" (UID: \"83478c50-2691-45b5-abc6-039a23bb1645\") " pod="openshift-infra/auto-csr-approver-29533656-kxl9w"
Feb 25 11:36:00 crc kubenswrapper[5005]: I0225 11:36:00.455679 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533656-kxl9w"
Feb 25 11:36:02 crc kubenswrapper[5005]: I0225 11:36:02.449499 5005 generic.go:334] "Generic (PLEG): container finished" podID="b06f7ac8-89c3-4886-9cd8-58353d24e476" containerID="2f1fd2b4d6c287f940d037a15029e5fa6f07d763296be423d3346c04527a84c6" exitCode=0
Feb 25 11:36:02 crc kubenswrapper[5005]: I0225 11:36:02.449584 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wngsg" event={"ID":"b06f7ac8-89c3-4886-9cd8-58353d24e476","Type":"ContainerDied","Data":"2f1fd2b4d6c287f940d037a15029e5fa6f07d763296be423d3346c04527a84c6"}
Feb 25 11:36:06 crc kubenswrapper[5005]: E0225 11:36:06.381175 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Feb 25 11:36:06 crc kubenswrapper[5005]: E0225 11:36:06.382110 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k8vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6cr5h_openstack(3052cba9-7666-438d-a17c-d3028c836c1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 25 11:36:06 crc kubenswrapper[5005]: E0225 11:36:06.383756 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6cr5h" podUID="3052cba9-7666-438d-a17c-d3028c836c1d"
Feb 25 11:36:06 crc kubenswrapper[5005]: E0225 11:36:06.487141 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6cr5h" podUID="3052cba9-7666-438d-a17c-d3028c836c1d"
Feb 25 11:36:07 crc kubenswrapper[5005]: I0225 11:36:07.726728 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.077146 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.083167 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wngsg"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.148075 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-nb\") pod \"7e5145ce-efae-4a28-9f07-d2922d2682bb\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.148431 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-sb\") pod \"7e5145ce-efae-4a28-9f07-d2922d2682bb\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.148465 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkp5r\" (UniqueName: \"kubernetes.io/projected/7e5145ce-efae-4a28-9f07-d2922d2682bb-kube-api-access-fkp5r\") pod \"7e5145ce-efae-4a28-9f07-d2922d2682bb\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.148545 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-config\") pod \"b06f7ac8-89c3-4886-9cd8-58353d24e476\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.149352 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-combined-ca-bundle\") pod \"b06f7ac8-89c3-4886-9cd8-58353d24e476\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.149415 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql7lh\" (UniqueName: \"kubernetes.io/projected/b06f7ac8-89c3-4886-9cd8-58353d24e476-kube-api-access-ql7lh\") pod \"b06f7ac8-89c3-4886-9cd8-58353d24e476\" (UID: \"b06f7ac8-89c3-4886-9cd8-58353d24e476\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.149837 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-dns-svc\") pod \"7e5145ce-efae-4a28-9f07-d2922d2682bb\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.149910 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-config\") pod \"7e5145ce-efae-4a28-9f07-d2922d2682bb\" (UID: \"7e5145ce-efae-4a28-9f07-d2922d2682bb\") "
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.160970 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06f7ac8-89c3-4886-9cd8-58353d24e476-kube-api-access-ql7lh" (OuterVolumeSpecName: "kube-api-access-ql7lh") pod "b06f7ac8-89c3-4886-9cd8-58353d24e476" (UID: "b06f7ac8-89c3-4886-9cd8-58353d24e476"). InnerVolumeSpecName "kube-api-access-ql7lh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.167113 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5145ce-efae-4a28-9f07-d2922d2682bb-kube-api-access-fkp5r" (OuterVolumeSpecName: "kube-api-access-fkp5r") pod "7e5145ce-efae-4a28-9f07-d2922d2682bb" (UID: "7e5145ce-efae-4a28-9f07-d2922d2682bb"). InnerVolumeSpecName "kube-api-access-fkp5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.184908 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-config" (OuterVolumeSpecName: "config") pod "b06f7ac8-89c3-4886-9cd8-58353d24e476" (UID: "b06f7ac8-89c3-4886-9cd8-58353d24e476"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.186642 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b06f7ac8-89c3-4886-9cd8-58353d24e476" (UID: "b06f7ac8-89c3-4886-9cd8-58353d24e476"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.241632 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-config" (OuterVolumeSpecName: "config") pod "7e5145ce-efae-4a28-9f07-d2922d2682bb" (UID: "7e5145ce-efae-4a28-9f07-d2922d2682bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.242524 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e5145ce-efae-4a28-9f07-d2922d2682bb" (UID: "7e5145ce-efae-4a28-9f07-d2922d2682bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.243564 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e5145ce-efae-4a28-9f07-d2922d2682bb" (UID: "7e5145ce-efae-4a28-9f07-d2922d2682bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.252927 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-config\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.253114 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.253310 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.253400 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkp5r\" (UniqueName: \"kubernetes.io/projected/7e5145ce-efae-4a28-9f07-d2922d2682bb-kube-api-access-fkp5r\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.253492 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-config\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.253551 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f7ac8-89c3-4886-9cd8-58353d24e476-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.253602 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql7lh\" (UniqueName: \"kubernetes.io/projected/b06f7ac8-89c3-4886-9cd8-58353d24e476-kube-api-access-ql7lh\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.273056 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e5145ce-efae-4a28-9f07-d2922d2682bb" (UID: "7e5145ce-efae-4a28-9f07-d2922d2682bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.357699 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e5145ce-efae-4a28-9f07-d2922d2682bb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.498246 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hpfk4"]
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.513277 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" event={"ID":"7e5145ce-efae-4a28-9f07-d2922d2682bb","Type":"ContainerDied","Data":"c91c7bda94e61d228bb07edf2f314af3700c8bcac48fb0b33724c4d58bfe913a"}
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.513322 5005 scope.go:117] "RemoveContainer" containerID="2085434377bad043b70e66957f75af5fd0c473326a82f50f8848ccc9c8dba41e"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.513435 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.526794 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wngsg" event={"ID":"b06f7ac8-89c3-4886-9cd8-58353d24e476","Type":"ContainerDied","Data":"d8d3491a19fa29f50e593c46d8237fe01cc958c7309b457ab2140680ded9ff52"}
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.526825 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d3491a19fa29f50e593c46d8237fe01cc958c7309b457ab2140680ded9ff52"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.526861 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wngsg"
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.556577 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-shqxt"]
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.566401 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-shqxt"]
Feb 25 11:36:08 crc kubenswrapper[5005]: I0225 11:36:08.698333 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" path="/var/lib/kubelet/pods/7e5145ce-efae-4a28-9f07-d2922d2682bb/volumes"
Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.290702 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-rccrg"]
Feb 25 11:36:09 crc kubenswrapper[5005]: E0225 11:36:09.291043 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="init"
Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.291057 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="init"
Feb 25 11:36:09 crc kubenswrapper[5005]: E0225 11:36:09.291066 5005 cpu_manager.go:410] "RemoveStaleState:
removing container" podUID="b06f7ac8-89c3-4886-9cd8-58353d24e476" containerName="neutron-db-sync" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.291073 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06f7ac8-89c3-4886-9cd8-58353d24e476" containerName="neutron-db-sync" Feb 25 11:36:09 crc kubenswrapper[5005]: E0225 11:36:09.291101 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.291107 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.291252 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.291266 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06f7ac8-89c3-4886-9cd8-58353d24e476" containerName="neutron-db-sync" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.292101 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.318086 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-rccrg"] Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.373207 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f5b655546-bjxxg"] Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.375459 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.378953 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.379150 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.379466 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.379573 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sv6bf" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.386359 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f5b655546-bjxxg"] Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.473993 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.474264 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.477540 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-config\") pod 
\"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.477638 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mmh\" (UniqueName: \"kubernetes.io/projected/20571c67-683c-4662-b39e-6dbd75aa8c51-kube-api-access-x2mmh\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.477761 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-dns-svc\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580087 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580160 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-config\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580201 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-config\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: 
\"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580219 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mmh\" (UniqueName: \"kubernetes.io/projected/20571c67-683c-4662-b39e-6dbd75aa8c51-kube-api-access-x2mmh\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580243 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-httpd-config\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580278 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh6wh\" (UniqueName: \"kubernetes.io/projected/b568af4f-9018-43a0-abb5-5e656bb4e039-kube-api-access-hh6wh\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580317 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-combined-ca-bundle\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580339 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-ovndb-tls-certs\") pod 
\"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580360 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-dns-svc\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.580404 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.581212 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.581317 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.581766 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-config\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " 
pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.582356 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-dns-svc\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.607467 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mmh\" (UniqueName: \"kubernetes.io/projected/20571c67-683c-4662-b39e-6dbd75aa8c51-kube-api-access-x2mmh\") pod \"dnsmasq-dns-7b946d459c-rccrg\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.618750 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.681389 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-config\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.681460 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-httpd-config\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.681510 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh6wh\" (UniqueName: \"kubernetes.io/projected/b568af4f-9018-43a0-abb5-5e656bb4e039-kube-api-access-hh6wh\") 
pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.681539 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-combined-ca-bundle\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.681558 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-ovndb-tls-certs\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.685466 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-ovndb-tls-certs\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.686003 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-combined-ca-bundle\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.686043 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-config\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 
25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.686781 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-httpd-config\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:09 crc kubenswrapper[5005]: I0225 11:36:09.706629 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh6wh\" (UniqueName: \"kubernetes.io/projected/b568af4f-9018-43a0-abb5-5e656bb4e039-kube-api-access-hh6wh\") pod \"neutron-5f5b655546-bjxxg\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") " pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:10 crc kubenswrapper[5005]: I0225 11:36:09.999956 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:10 crc kubenswrapper[5005]: E0225 11:36:10.763803 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 25 11:36:10 crc kubenswrapper[5005]: E0225 11:36:10.765077 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzlv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qrzvf_openstack(1fa77b98-833c-4278-b615-49e4c28e69c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 25 11:36:10 crc kubenswrapper[5005]: E0225 11:36:10.766308 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qrzvf" podUID="1fa77b98-833c-4278-b615-49e4c28e69c5" Feb 25 11:36:11 crc kubenswrapper[5005]: W0225 11:36:11.069544 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753b53c4_aec7_4a64_bc02_76520bddb879.slice/crio-839246348f52b514e2c10f908a120e219736f2fd505c3cfb69f83c8280864a6e WatchSource:0}: Error finding container 839246348f52b514e2c10f908a120e219736f2fd505c3cfb69f83c8280864a6e: Status 404 returned error can't find the container with id 839246348f52b514e2c10f908a120e219736f2fd505c3cfb69f83c8280864a6e Feb 25 11:36:11 crc kubenswrapper[5005]: E0225 11:36:11.077133 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 25 11:36:11 crc kubenswrapper[5005]: E0225 11:36:11.077278 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch66bh55ch5bch549h5c4h99h557h5b4h644hc4h67dh97h56bh55h696hc8h57ch558h556h554h7dh5b7h85h665h5dhbfh65h5ddh646h578h554q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68rjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7b033028-9ac7-43d8-95da-931e7a25249a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.086907 5005 scope.go:117] "RemoveContainer" containerID="3fef4ac12736813affee756604d2aa92c54cfe09710c675657f662a1a385c08e" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.087107 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.092586 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.257323 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b769c78f-h4vrt"] Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.259169 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.262425 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.263007 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.266163 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b769c78f-h4vrt"] Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.322959 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-public-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.323109 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-httpd-config\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.323138 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhzd\" (UniqueName: \"kubernetes.io/projected/d92e4f70-284c-4330-94bd-5a052d96ac39-kube-api-access-wlhzd\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.323261 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-internal-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.323294 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-combined-ca-bundle\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.323319 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-config\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.323634 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-ovndb-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425143 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-internal-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425474 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-combined-ca-bundle\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425517 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-config\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425553 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-ovndb-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425605 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-public-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425626 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-httpd-config\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.425664 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhzd\" (UniqueName: \"kubernetes.io/projected/d92e4f70-284c-4330-94bd-5a052d96ac39-kube-api-access-wlhzd\") pod \"neutron-b769c78f-h4vrt\" (UID: 
\"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.436201 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-combined-ca-bundle\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.442317 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-config\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.446211 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-public-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.449897 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-ovndb-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.454451 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-internal-tls-certs\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.454681 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-httpd-config\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.455409 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhzd\" (UniqueName: \"kubernetes.io/projected/d92e4f70-284c-4330-94bd-5a052d96ac39-kube-api-access-wlhzd\") pod \"neutron-b769c78f-h4vrt\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.572617 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff575667-wqvzf" event={"ID":"9058179d-2c7b-4d0b-b162-10141ea754b0","Type":"ContainerStarted","Data":"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65"} Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.574319 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hpfk4" event={"ID":"753b53c4-aec7-4a64-bc02-76520bddb879","Type":"ContainerStarted","Data":"7d355b56657a0b42dd9013ec613aab9610bf772b267f913540ed906741b34b9a"} Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.574367 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hpfk4" event={"ID":"753b53c4-aec7-4a64-bc02-76520bddb879","Type":"ContainerStarted","Data":"839246348f52b514e2c10f908a120e219736f2fd505c3cfb69f83c8280864a6e"} Feb 25 11:36:11 crc kubenswrapper[5005]: E0225 11:36:11.591599 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qrzvf" podUID="1fa77b98-833c-4278-b615-49e4c28e69c5" 
Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.593592 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hpfk4" podStartSLOduration=21.593574611 podStartE2EDuration="21.593574611s" podCreationTimestamp="2026-02-25 11:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:11.589707454 +0000 UTC m=+1085.630439781" watchObservedRunningTime="2026-02-25 11:36:11.593574611 +0000 UTC m=+1085.634306938" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.658913 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f8bbcbf96-lg5q8"] Feb 25 11:36:11 crc kubenswrapper[5005]: W0225 11:36:11.665826 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode277143e_cdb1_4dda_976d_f06c58c14c33.slice/crio-da51cd824d1c01a80854d41d31fb9f38483ecc9203b7bf82ba1ccb919e1bd129 WatchSource:0}: Error finding container da51cd824d1c01a80854d41d31fb9f38483ecc9203b7bf82ba1ccb919e1bd129: Status 404 returned error can't find the container with id da51cd824d1c01a80854d41d31fb9f38483ecc9203b7bf82ba1ccb919e1bd129 Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.676113 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.736568 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533656-kxl9w"] Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.761780 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f46657c4d-qmj7f"] Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.875271 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-rccrg"] Feb 25 11:36:11 crc kubenswrapper[5005]: I0225 11:36:11.989391 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f5b655546-bjxxg"] Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.185482 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b769c78f-h4vrt"] Feb 25 11:36:12 crc kubenswrapper[5005]: W0225 11:36:12.194637 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd92e4f70_284c_4330_94bd_5a052d96ac39.slice/crio-5a926ad74a4f081dd6ced4116be816057460a466924d8e9bd793b9a79e9d3d25 WatchSource:0}: Error finding container 5a926ad74a4f081dd6ced4116be816057460a466924d8e9bd793b9a79e9d3d25: Status 404 returned error can't find the container with id 5a926ad74a4f081dd6ced4116be816057460a466924d8e9bd793b9a79e9d3d25 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.624569 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b655546-bjxxg" event={"ID":"b568af4f-9018-43a0-abb5-5e656bb4e039","Type":"ContainerStarted","Data":"7c495e8fbfb1d6621c6e39cd8b4230628f57149e411e7d8c1965a39dd11cca94"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.624897 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b655546-bjxxg" 
event={"ID":"b568af4f-9018-43a0-abb5-5e656bb4e039","Type":"ContainerStarted","Data":"779a2e11324bb5c2fa03b7614f573126227eb62d661596ef4c6d9d809909c6e5"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.627140 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f8bbcbf96-lg5q8" event={"ID":"e277143e-cdb1-4dda-976d-f06c58c14c33","Type":"ContainerStarted","Data":"1b01f1349350e32c63d9335be8ad91c640bc897e0c85e1cc08a1dc33d0b95c39"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.627165 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f8bbcbf96-lg5q8" event={"ID":"e277143e-cdb1-4dda-976d-f06c58c14c33","Type":"ContainerStarted","Data":"e654e9a495c35afe96db0ac37007c0fc82ebc935ee470c0dbf700dc892398d31"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.627176 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f8bbcbf96-lg5q8" event={"ID":"e277143e-cdb1-4dda-976d-f06c58c14c33","Type":"ContainerStarted","Data":"da51cd824d1c01a80854d41d31fb9f38483ecc9203b7bf82ba1ccb919e1bd129"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.629280 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wqnnp" event={"ID":"5a68e341-40f2-4b1b-b92f-bca9a4946b39","Type":"ContainerStarted","Data":"812606c9aa7e1cf9a292b1c36a21333a0c4d5758c70359d5e418e1781af50daf"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.636031 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b769c78f-h4vrt" event={"ID":"d92e4f70-284c-4330-94bd-5a052d96ac39","Type":"ContainerStarted","Data":"5a926ad74a4f081dd6ced4116be816057460a466924d8e9bd793b9a79e9d3d25"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.646059 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc6f57c7-g7rsc" 
event={"ID":"4869a7a1-945a-47d9-9239-b1537f04be41","Type":"ContainerStarted","Data":"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.646102 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc6f57c7-g7rsc" event={"ID":"4869a7a1-945a-47d9-9239-b1537f04be41","Type":"ContainerStarted","Data":"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.646210 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bc6f57c7-g7rsc" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon-log" containerID="cri-o://c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410" gracePeriod=30 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.646662 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bc6f57c7-g7rsc" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon" containerID="cri-o://1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7" gracePeriod=30 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.654885 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533656-kxl9w" event={"ID":"83478c50-2691-45b5-abc6-039a23bb1645","Type":"ContainerStarted","Data":"beee6a3ab5e0c4e77f938cd192395fc3b8939a42d3d161e2d335cdbb786d47e5"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.661846 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f8bbcbf96-lg5q8" podStartSLOduration=21.661828736 podStartE2EDuration="21.661828736s" podCreationTimestamp="2026-02-25 11:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:12.650798791 +0000 UTC m=+1086.691531118" 
watchObservedRunningTime="2026-02-25 11:36:12.661828736 +0000 UTC m=+1086.702561063" Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.667543 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f46657c4d-qmj7f" event={"ID":"f37d127e-e5d1-45f3-8e44-262ab354e0c2","Type":"ContainerStarted","Data":"4d2a9eb2800c2682c577225ec553c32240fbc85fb06c20035304d3f03ddba264"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.667613 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f46657c4d-qmj7f" event={"ID":"f37d127e-e5d1-45f3-8e44-262ab354e0c2","Type":"ContainerStarted","Data":"69013907147be4d78676e13f1a212c8cf91c9975f9e239bda625c13ee8a696a3"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.667632 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f46657c4d-qmj7f" event={"ID":"f37d127e-e5d1-45f3-8e44-262ab354e0c2","Type":"ContainerStarted","Data":"5a5e3519bce33a27a443a137d310a1c393059de2e401f08195ede211191831d4"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.678186 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bc6f57c7-g7rsc" podStartSLOduration=3.353270155 podStartE2EDuration="28.678167003s" podCreationTimestamp="2026-02-25 11:35:44 +0000 UTC" firstStartedPulling="2026-02-25 11:35:45.825301997 +0000 UTC m=+1059.866034324" lastFinishedPulling="2026-02-25 11:36:11.150198845 +0000 UTC m=+1085.190931172" observedRunningTime="2026-02-25 11:36:12.668426677 +0000 UTC m=+1086.709159004" watchObservedRunningTime="2026-02-25 11:36:12.678167003 +0000 UTC m=+1086.718899330" Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.691571 5005 generic.go:334] "Generic (PLEG): container finished" podID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerID="755608b8af2ea3340a43119a5bfc00daffbe414aa391bce553535b4e280fcd48" exitCode=0 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.696170 5005 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/placement-db-sync-wqnnp" podStartSLOduration=6.465657173 podStartE2EDuration="30.696153701s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="2026-02-25 11:35:43.746191523 +0000 UTC m=+1057.786923850" lastFinishedPulling="2026-02-25 11:36:07.976688061 +0000 UTC m=+1082.017420378" observedRunningTime="2026-02-25 11:36:12.692590722 +0000 UTC m=+1086.733323049" watchObservedRunningTime="2026-02-25 11:36:12.696153701 +0000 UTC m=+1086.736886028" Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.701146 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7ff575667-wqvzf" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon-log" containerID="cri-o://4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65" gracePeriod=30 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.701451 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7ff575667-wqvzf" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon" containerID="cri-o://c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126" gracePeriod=30 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.710120 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d4b89f889-cxtxj" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon-log" containerID="cri-o://e5ff6c4894d10dce91d86d353419478f98f2b32e9775cb58f903f0f8348bd8af" gracePeriod=30 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.710255 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d4b89f889-cxtxj" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon" containerID="cri-o://c0bbfe176cabf6d4dbe27ce68585a6d343e23b905b032d79cbe4c2c37ca69e25" gracePeriod=30 Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.728461 5005 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-shqxt" podUID="7e5145ce-efae-4a28-9f07-d2922d2682bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.753399 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" event={"ID":"20571c67-683c-4662-b39e-6dbd75aa8c51","Type":"ContainerDied","Data":"755608b8af2ea3340a43119a5bfc00daffbe414aa391bce553535b4e280fcd48"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.753456 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" event={"ID":"20571c67-683c-4662-b39e-6dbd75aa8c51","Type":"ContainerStarted","Data":"a08b37e749ef63e7c282c8030a83d73fb1283d24cd79b1695c09c55ebf83dd4e"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.753467 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff575667-wqvzf" event={"ID":"9058179d-2c7b-4d0b-b162-10141ea754b0","Type":"ContainerStarted","Data":"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.753476 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d4b89f889-cxtxj" event={"ID":"a38c83b4-5b4f-4b02-b427-ff357cbbb47c","Type":"ContainerStarted","Data":"e5ff6c4894d10dce91d86d353419478f98f2b32e9775cb58f903f0f8348bd8af"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.753486 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d4b89f889-cxtxj" event={"ID":"a38c83b4-5b4f-4b02-b427-ff357cbbb47c","Type":"ContainerStarted","Data":"c0bbfe176cabf6d4dbe27ce68585a6d343e23b905b032d79cbe4c2c37ca69e25"} Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.803228 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f46657c4d-qmj7f" 
podStartSLOduration=21.803201296 podStartE2EDuration="21.803201296s" podCreationTimestamp="2026-02-25 11:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:12.745730419 +0000 UTC m=+1086.786462746" watchObservedRunningTime="2026-02-25 11:36:12.803201296 +0000 UTC m=+1086.843933623" Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.812898 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d4b89f889-cxtxj" podStartSLOduration=7.09510825 podStartE2EDuration="30.812874651s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="2026-02-25 11:35:44.258912309 +0000 UTC m=+1058.299644636" lastFinishedPulling="2026-02-25 11:36:07.97667872 +0000 UTC m=+1082.017411037" observedRunningTime="2026-02-25 11:36:12.767516151 +0000 UTC m=+1086.808248478" watchObservedRunningTime="2026-02-25 11:36:12.812874651 +0000 UTC m=+1086.853606978" Feb 25 11:36:12 crc kubenswrapper[5005]: I0225 11:36:12.818139 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7ff575667-wqvzf" podStartSLOduration=3.835868979 podStartE2EDuration="30.818124881s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="2026-02-25 11:35:44.148159171 +0000 UTC m=+1058.188891498" lastFinishedPulling="2026-02-25 11:36:11.130415083 +0000 UTC m=+1085.171147400" observedRunningTime="2026-02-25 11:36:12.792776019 +0000 UTC m=+1086.833508346" watchObservedRunningTime="2026-02-25 11:36:12.818124881 +0000 UTC m=+1086.858857198" Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.231067 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.459503 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:36:13 crc 
kubenswrapper[5005]: I0225 11:36:13.726802 5005 generic.go:334] "Generic (PLEG): container finished" podID="5a68e341-40f2-4b1b-b92f-bca9a4946b39" containerID="812606c9aa7e1cf9a292b1c36a21333a0c4d5758c70359d5e418e1781af50daf" exitCode=0 Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.726864 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wqnnp" event={"ID":"5a68e341-40f2-4b1b-b92f-bca9a4946b39","Type":"ContainerDied","Data":"812606c9aa7e1cf9a292b1c36a21333a0c4d5758c70359d5e418e1781af50daf"} Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.728304 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerStarted","Data":"fd4b1f1784a58de950cae7d57546ef271186ae04bd51d79894a828b440dd457d"} Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.739934 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" event={"ID":"20571c67-683c-4662-b39e-6dbd75aa8c51","Type":"ContainerStarted","Data":"d6d5ec54eb34191d83027271b852ea585a484b7cf2e27314616a6f49207384e1"} Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.742797 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.757971 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b769c78f-h4vrt" event={"ID":"d92e4f70-284c-4330-94bd-5a052d96ac39","Type":"ContainerStarted","Data":"3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b"} Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.771644 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" podStartSLOduration=4.771629834 podStartE2EDuration="4.771629834s" podCreationTimestamp="2026-02-25 11:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:13.77016628 +0000 UTC m=+1087.810898607" watchObservedRunningTime="2026-02-25 11:36:13.771629834 +0000 UTC m=+1087.812362161" Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.773305 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b655546-bjxxg" event={"ID":"b568af4f-9018-43a0-abb5-5e656bb4e039","Type":"ContainerStarted","Data":"605440471bce3b821f21c3944458ca4a7b7938db6957d3b54b6bfe3a49a6298e"} Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.773345 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:13 crc kubenswrapper[5005]: I0225 11:36:13.807242 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f5b655546-bjxxg" podStartSLOduration=4.807226358 podStartE2EDuration="4.807226358s" podCreationTimestamp="2026-02-25 11:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:13.802096721 +0000 UTC m=+1087.842829038" watchObservedRunningTime="2026-02-25 11:36:13.807226358 +0000 UTC m=+1087.847958685" Feb 25 11:36:14 crc kubenswrapper[5005]: I0225 11:36:14.778446 5005 generic.go:334] "Generic (PLEG): container finished" podID="83478c50-2691-45b5-abc6-039a23bb1645" containerID="352b4c167508b99802a2b2e479fa1a8f87190e923206c04f5d0eb1bef74ba6a0" exitCode=0 Feb 25 11:36:14 crc kubenswrapper[5005]: I0225 11:36:14.778548 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533656-kxl9w" event={"ID":"83478c50-2691-45b5-abc6-039a23bb1645","Type":"ContainerDied","Data":"352b4c167508b99802a2b2e479fa1a8f87190e923206c04f5d0eb1bef74ba6a0"} Feb 25 11:36:14 crc kubenswrapper[5005]: I0225 11:36:14.781644 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-b769c78f-h4vrt" event={"ID":"d92e4f70-284c-4330-94bd-5a052d96ac39","Type":"ContainerStarted","Data":"96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847"} Feb 25 11:36:14 crc kubenswrapper[5005]: I0225 11:36:14.835451 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b769c78f-h4vrt" podStartSLOduration=3.835430904 podStartE2EDuration="3.835430904s" podCreationTimestamp="2026-02-25 11:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:14.829794712 +0000 UTC m=+1088.870527039" watchObservedRunningTime="2026-02-25 11:36:14.835430904 +0000 UTC m=+1088.876163231" Feb 25 11:36:14 crc kubenswrapper[5005]: I0225 11:36:14.940888 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.174842 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wqnnp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.216445 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a68e341-40f2-4b1b-b92f-bca9a4946b39-logs\") pod \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.216541 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-config-data\") pod \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.216612 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-scripts\") pod \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.216677 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-combined-ca-bundle\") pod \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.216709 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ppfk\" (UniqueName: \"kubernetes.io/projected/5a68e341-40f2-4b1b-b92f-bca9a4946b39-kube-api-access-2ppfk\") pod \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\" (UID: \"5a68e341-40f2-4b1b-b92f-bca9a4946b39\") " Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.216956 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5a68e341-40f2-4b1b-b92f-bca9a4946b39-logs" (OuterVolumeSpecName: "logs") pod "5a68e341-40f2-4b1b-b92f-bca9a4946b39" (UID: "5a68e341-40f2-4b1b-b92f-bca9a4946b39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.217384 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a68e341-40f2-4b1b-b92f-bca9a4946b39-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.225845 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-scripts" (OuterVolumeSpecName: "scripts") pod "5a68e341-40f2-4b1b-b92f-bca9a4946b39" (UID: "5a68e341-40f2-4b1b-b92f-bca9a4946b39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.236548 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a68e341-40f2-4b1b-b92f-bca9a4946b39-kube-api-access-2ppfk" (OuterVolumeSpecName: "kube-api-access-2ppfk") pod "5a68e341-40f2-4b1b-b92f-bca9a4946b39" (UID: "5a68e341-40f2-4b1b-b92f-bca9a4946b39"). InnerVolumeSpecName "kube-api-access-2ppfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.250939 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-config-data" (OuterVolumeSpecName: "config-data") pod "5a68e341-40f2-4b1b-b92f-bca9a4946b39" (UID: "5a68e341-40f2-4b1b-b92f-bca9a4946b39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.258632 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a68e341-40f2-4b1b-b92f-bca9a4946b39" (UID: "5a68e341-40f2-4b1b-b92f-bca9a4946b39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.320341 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.320389 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.320398 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a68e341-40f2-4b1b-b92f-bca9a4946b39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.320411 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ppfk\" (UniqueName: \"kubernetes.io/projected/5a68e341-40f2-4b1b-b92f-bca9a4946b39-kube-api-access-2ppfk\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.808168 5005 generic.go:334] "Generic (PLEG): container finished" podID="753b53c4-aec7-4a64-bc02-76520bddb879" containerID="7d355b56657a0b42dd9013ec613aab9610bf772b267f913540ed906741b34b9a" exitCode=0 Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.808252 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hpfk4" 
event={"ID":"753b53c4-aec7-4a64-bc02-76520bddb879","Type":"ContainerDied","Data":"7d355b56657a0b42dd9013ec613aab9610bf772b267f913540ed906741b34b9a"} Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.820456 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wqnnp" event={"ID":"5a68e341-40f2-4b1b-b92f-bca9a4946b39","Type":"ContainerDied","Data":"ba12959e3447ca77627118765a66de9640473945a13a6ee2487b7312b04d8226"} Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.820513 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba12959e3447ca77627118765a66de9640473945a13a6ee2487b7312b04d8226" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.821515 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wqnnp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.821600 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.905324 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7445447f66-tpwqp"] Feb 25 11:36:15 crc kubenswrapper[5005]: E0225 11:36:15.905671 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a68e341-40f2-4b1b-b92f-bca9a4946b39" containerName="placement-db-sync" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.905687 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a68e341-40f2-4b1b-b92f-bca9a4946b39" containerName="placement-db-sync" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.905857 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a68e341-40f2-4b1b-b92f-bca9a4946b39" containerName="placement-db-sync" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.906664 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.921626 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.925887 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.926903 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.928854 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.929335 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sz5nt" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936379 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60d4805b-4c99-4d5c-9e31-744e20ba5c55-logs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936434 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-public-tls-certs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936471 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-internal-tls-certs\") pod 
\"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936486 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-config-data\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936514 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-scripts\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936534 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v25sp\" (UniqueName: \"kubernetes.io/projected/60d4805b-4c99-4d5c-9e31-744e20ba5c55-kube-api-access-v25sp\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.936585 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-combined-ca-bundle\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:15 crc kubenswrapper[5005]: I0225 11:36:15.943778 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7445447f66-tpwqp"] Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038318 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60d4805b-4c99-4d5c-9e31-744e20ba5c55-logs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038386 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-public-tls-certs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038419 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-internal-tls-certs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038439 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-config-data\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038466 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-scripts\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038492 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v25sp\" (UniqueName: 
\"kubernetes.io/projected/60d4805b-4c99-4d5c-9e31-744e20ba5c55-kube-api-access-v25sp\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.038521 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-combined-ca-bundle\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.044115 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-combined-ca-bundle\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.046196 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-internal-tls-certs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.046464 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60d4805b-4c99-4d5c-9e31-744e20ba5c55-logs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.051006 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-config-data\") pod 
\"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.056741 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-public-tls-certs\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.068115 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v25sp\" (UniqueName: \"kubernetes.io/projected/60d4805b-4c99-4d5c-9e31-744e20ba5c55-kube-api-access-v25sp\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.069625 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-scripts\") pod \"placement-7445447f66-tpwqp\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") " pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.222184 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.310602 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533656-kxl9w" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.342802 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqlng\" (UniqueName: \"kubernetes.io/projected/83478c50-2691-45b5-abc6-039a23bb1645-kube-api-access-gqlng\") pod \"83478c50-2691-45b5-abc6-039a23bb1645\" (UID: \"83478c50-2691-45b5-abc6-039a23bb1645\") " Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.351644 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83478c50-2691-45b5-abc6-039a23bb1645-kube-api-access-gqlng" (OuterVolumeSpecName: "kube-api-access-gqlng") pod "83478c50-2691-45b5-abc6-039a23bb1645" (UID: "83478c50-2691-45b5-abc6-039a23bb1645"). InnerVolumeSpecName "kube-api-access-gqlng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.444253 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqlng\" (UniqueName: \"kubernetes.io/projected/83478c50-2691-45b5-abc6-039a23bb1645-kube-api-access-gqlng\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.790274 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7445447f66-tpwqp"] Feb 25 11:36:16 crc kubenswrapper[5005]: W0225 11:36:16.799223 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60d4805b_4c99_4d5c_9e31_744e20ba5c55.slice/crio-471925f01a0a55cc3786b6f5d0575a9d163e35bba711fb9c2b364656b1a5448c WatchSource:0}: Error finding container 471925f01a0a55cc3786b6f5d0575a9d163e35bba711fb9c2b364656b1a5448c: Status 404 returned error can't find the container with id 471925f01a0a55cc3786b6f5d0575a9d163e35bba711fb9c2b364656b1a5448c Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.839355 5005 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-7445447f66-tpwqp" event={"ID":"60d4805b-4c99-4d5c-9e31-744e20ba5c55","Type":"ContainerStarted","Data":"471925f01a0a55cc3786b6f5d0575a9d163e35bba711fb9c2b364656b1a5448c"} Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.842966 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533656-kxl9w" event={"ID":"83478c50-2691-45b5-abc6-039a23bb1645","Type":"ContainerDied","Data":"beee6a3ab5e0c4e77f938cd192395fc3b8939a42d3d161e2d335cdbb786d47e5"} Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.842996 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beee6a3ab5e0c4e77f938cd192395fc3b8939a42d3d161e2d335cdbb786d47e5" Feb 25 11:36:16 crc kubenswrapper[5005]: I0225 11:36:16.843046 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533656-kxl9w" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.368479 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533650-ntmlt"] Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.376482 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533650-ntmlt"] Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.561489 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.706256 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-scripts\") pod \"753b53c4-aec7-4a64-bc02-76520bddb879\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.706314 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz2sk\" (UniqueName: \"kubernetes.io/projected/753b53c4-aec7-4a64-bc02-76520bddb879-kube-api-access-mz2sk\") pod \"753b53c4-aec7-4a64-bc02-76520bddb879\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.706380 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-config-data\") pod \"753b53c4-aec7-4a64-bc02-76520bddb879\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.706421 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-fernet-keys\") pod \"753b53c4-aec7-4a64-bc02-76520bddb879\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.706476 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-combined-ca-bundle\") pod \"753b53c4-aec7-4a64-bc02-76520bddb879\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.706542 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-credential-keys\") pod \"753b53c4-aec7-4a64-bc02-76520bddb879\" (UID: \"753b53c4-aec7-4a64-bc02-76520bddb879\") " Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.713950 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "753b53c4-aec7-4a64-bc02-76520bddb879" (UID: "753b53c4-aec7-4a64-bc02-76520bddb879"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.714070 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "753b53c4-aec7-4a64-bc02-76520bddb879" (UID: "753b53c4-aec7-4a64-bc02-76520bddb879"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.718488 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-scripts" (OuterVolumeSpecName: "scripts") pod "753b53c4-aec7-4a64-bc02-76520bddb879" (UID: "753b53c4-aec7-4a64-bc02-76520bddb879"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.724567 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753b53c4-aec7-4a64-bc02-76520bddb879-kube-api-access-mz2sk" (OuterVolumeSpecName: "kube-api-access-mz2sk") pod "753b53c4-aec7-4a64-bc02-76520bddb879" (UID: "753b53c4-aec7-4a64-bc02-76520bddb879"). InnerVolumeSpecName "kube-api-access-mz2sk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.746508 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "753b53c4-aec7-4a64-bc02-76520bddb879" (UID: "753b53c4-aec7-4a64-bc02-76520bddb879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.770499 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-config-data" (OuterVolumeSpecName: "config-data") pod "753b53c4-aec7-4a64-bc02-76520bddb879" (UID: "753b53c4-aec7-4a64-bc02-76520bddb879"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.809786 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.810738 5005 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.810766 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.810780 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz2sk\" (UniqueName: \"kubernetes.io/projected/753b53c4-aec7-4a64-bc02-76520bddb879-kube-api-access-mz2sk\") on node \"crc\" DevicePath 
\"\"" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.810852 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.810867 5005 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/753b53c4-aec7-4a64-bc02-76520bddb879-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.873007 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7445447f66-tpwqp" event={"ID":"60d4805b-4c99-4d5c-9e31-744e20ba5c55","Type":"ContainerStarted","Data":"96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb"} Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.873087 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7445447f66-tpwqp" event={"ID":"60d4805b-4c99-4d5c-9e31-744e20ba5c55","Type":"ContainerStarted","Data":"c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118"} Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.874092 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.874315 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7445447f66-tpwqp" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.889947 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hpfk4" event={"ID":"753b53c4-aec7-4a64-bc02-76520bddb879","Type":"ContainerDied","Data":"839246348f52b514e2c10f908a120e219736f2fd505c3cfb69f83c8280864a6e"} Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.890013 5005 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="839246348f52b514e2c10f908a120e219736f2fd505c3cfb69f83c8280864a6e" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.890116 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hpfk4" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.941551 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7445447f66-tpwqp" podStartSLOduration=2.941522617 podStartE2EDuration="2.941522617s" podCreationTimestamp="2026-02-25 11:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:17.903466659 +0000 UTC m=+1091.944199006" watchObservedRunningTime="2026-02-25 11:36:17.941522617 +0000 UTC m=+1091.982254944" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.966006 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74447dd785-mk8tc"] Feb 25 11:36:17 crc kubenswrapper[5005]: E0225 11:36:17.966631 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83478c50-2691-45b5-abc6-039a23bb1645" containerName="oc" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.966654 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="83478c50-2691-45b5-abc6-039a23bb1645" containerName="oc" Feb 25 11:36:17 crc kubenswrapper[5005]: E0225 11:36:17.966688 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753b53c4-aec7-4a64-bc02-76520bddb879" containerName="keystone-bootstrap" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.966698 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="753b53c4-aec7-4a64-bc02-76520bddb879" containerName="keystone-bootstrap" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.966912 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="753b53c4-aec7-4a64-bc02-76520bddb879" containerName="keystone-bootstrap" Feb 25 11:36:17 crc 
kubenswrapper[5005]: I0225 11:36:17.966940 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="83478c50-2691-45b5-abc6-039a23bb1645" containerName="oc" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.968190 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.970415 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nppj9" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.975311 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.975638 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.975797 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.975928 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.976074 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 25 11:36:17 crc kubenswrapper[5005]: I0225 11:36:17.988203 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74447dd785-mk8tc"] Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.121972 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-credential-keys\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122123 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-public-tls-certs\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122299 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-fernet-keys\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122397 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-scripts\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122529 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-internal-tls-certs\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122808 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-combined-ca-bundle\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122873 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-config-data\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.122914 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtnr\" (UniqueName: \"kubernetes.io/projected/360249f4-3024-4567-afa0-f52fb42cc400-kube-api-access-tmtnr\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224431 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-combined-ca-bundle\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224701 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-config-data\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224720 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtnr\" (UniqueName: \"kubernetes.io/projected/360249f4-3024-4567-afa0-f52fb42cc400-kube-api-access-tmtnr\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224763 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-credential-keys\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224782 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-public-tls-certs\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224806 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-fernet-keys\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224825 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-scripts\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.224868 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-internal-tls-certs\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.230295 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-internal-tls-certs\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.232163 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-credential-keys\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.234717 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-public-tls-certs\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.238258 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-combined-ca-bundle\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.239350 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-scripts\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.247049 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-fernet-keys\") pod \"keystone-74447dd785-mk8tc\" (UID: 
\"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.255007 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360249f4-3024-4567-afa0-f52fb42cc400-config-data\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.268903 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtnr\" (UniqueName: \"kubernetes.io/projected/360249f4-3024-4567-afa0-f52fb42cc400-kube-api-access-tmtnr\") pod \"keystone-74447dd785-mk8tc\" (UID: \"360249f4-3024-4567-afa0-f52fb42cc400\") " pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.315231 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:18 crc kubenswrapper[5005]: I0225 11:36:18.696665 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cda6332-8370-4c9a-813d-ebc3e97b91b9" path="/var/lib/kubelet/pods/3cda6332-8370-4c9a-813d-ebc3e97b91b9/volumes" Feb 25 11:36:19 crc kubenswrapper[5005]: I0225 11:36:19.621148 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:19 crc kubenswrapper[5005]: I0225 11:36:19.726902 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vgsgg"] Feb 25 11:36:19 crc kubenswrapper[5005]: I0225 11:36:19.727381 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" containerName="dnsmasq-dns" containerID="cri-o://c04c4388259f62c1d6f77ee3160313648fc0753c4b24d2afe7cb0263ca31d349" gracePeriod=10 Feb 25 11:36:19 crc 
kubenswrapper[5005]: I0225 11:36:19.924601 5005 generic.go:334] "Generic (PLEG): container finished" podID="ec247cb2-074d-4c74-96fd-62353e78b898" containerID="c04c4388259f62c1d6f77ee3160313648fc0753c4b24d2afe7cb0263ca31d349" exitCode=0 Feb 25 11:36:19 crc kubenswrapper[5005]: I0225 11:36:19.924695 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" event={"ID":"ec247cb2-074d-4c74-96fd-62353e78b898","Type":"ContainerDied","Data":"c04c4388259f62c1d6f77ee3160313648fc0753c4b24d2afe7cb0263ca31d349"} Feb 25 11:36:21 crc kubenswrapper[5005]: I0225 11:36:21.541750 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f46657c4d-qmj7f" Feb 25 11:36:21 crc kubenswrapper[5005]: I0225 11:36:21.542171 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f46657c4d-qmj7f" Feb 25 11:36:21 crc kubenswrapper[5005]: I0225 11:36:21.689689 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f8bbcbf96-lg5q8" Feb 25 11:36:21 crc kubenswrapper[5005]: I0225 11:36:21.689732 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f8bbcbf96-lg5q8" Feb 25 11:36:22 crc kubenswrapper[5005]: I0225 11:36:22.962330 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" event={"ID":"ec247cb2-074d-4c74-96fd-62353e78b898","Type":"ContainerDied","Data":"14e7384adc7be20ad901dd1e55d343344b93d7a089aea9c37173950be8a28f7e"} Feb 25 11:36:22 crc kubenswrapper[5005]: I0225 11:36:22.963363 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e7384adc7be20ad901dd1e55d343344b93d7a089aea9c37173950be8a28f7e" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.107991 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.162519 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-sb\") pod \"ec247cb2-074d-4c74-96fd-62353e78b898\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.162610 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-config\") pod \"ec247cb2-074d-4c74-96fd-62353e78b898\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.162696 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-nb\") pod \"ec247cb2-074d-4c74-96fd-62353e78b898\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.162740 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-dns-svc\") pod \"ec247cb2-074d-4c74-96fd-62353e78b898\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.162758 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9rxx\" (UniqueName: \"kubernetes.io/projected/ec247cb2-074d-4c74-96fd-62353e78b898-kube-api-access-p9rxx\") pod \"ec247cb2-074d-4c74-96fd-62353e78b898\" (UID: \"ec247cb2-074d-4c74-96fd-62353e78b898\") " Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.169112 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ec247cb2-074d-4c74-96fd-62353e78b898-kube-api-access-p9rxx" (OuterVolumeSpecName: "kube-api-access-p9rxx") pod "ec247cb2-074d-4c74-96fd-62353e78b898" (UID: "ec247cb2-074d-4c74-96fd-62353e78b898"). InnerVolumeSpecName "kube-api-access-p9rxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.264836 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9rxx\" (UniqueName: \"kubernetes.io/projected/ec247cb2-074d-4c74-96fd-62353e78b898-kube-api-access-p9rxx\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.337927 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec247cb2-074d-4c74-96fd-62353e78b898" (UID: "ec247cb2-074d-4c74-96fd-62353e78b898"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.351169 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-config" (OuterVolumeSpecName: "config") pod "ec247cb2-074d-4c74-96fd-62353e78b898" (UID: "ec247cb2-074d-4c74-96fd-62353e78b898"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.353912 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec247cb2-074d-4c74-96fd-62353e78b898" (UID: "ec247cb2-074d-4c74-96fd-62353e78b898"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.358831 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec247cb2-074d-4c74-96fd-62353e78b898" (UID: "ec247cb2-074d-4c74-96fd-62353e78b898"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.366274 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.366304 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.366314 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.366322 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec247cb2-074d-4c74-96fd-62353e78b898-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.396675 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74447dd785-mk8tc"] Feb 25 11:36:23 crc kubenswrapper[5005]: W0225 11:36:23.403767 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360249f4_3024_4567_afa0_f52fb42cc400.slice/crio-c9377db6f04b3b11b7d4d4f9a9f954de49ab328c6dd667b1b9636f9e53191d87 
WatchSource:0}: Error finding container c9377db6f04b3b11b7d4d4f9a9f954de49ab328c6dd667b1b9636f9e53191d87: Status 404 returned error can't find the container with id c9377db6f04b3b11b7d4d4f9a9f954de49ab328c6dd667b1b9636f9e53191d87 Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.970551 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6cr5h" event={"ID":"3052cba9-7666-438d-a17c-d3028c836c1d","Type":"ContainerStarted","Data":"a4886de982ae95cffe905f9aa650ce1944e0e94f50b541efd4073a37ba763d49"} Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.985166 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74447dd785-mk8tc" event={"ID":"360249f4-3024-4567-afa0-f52fb42cc400","Type":"ContainerStarted","Data":"475fae690e3493edcc25529e467bdc566a0d15dd3c530d6a3874ceb9e9cedbcf"} Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.985219 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74447dd785-mk8tc" event={"ID":"360249f4-3024-4567-afa0-f52fb42cc400","Type":"ContainerStarted","Data":"c9377db6f04b3b11b7d4d4f9a9f954de49ab328c6dd667b1b9636f9e53191d87"} Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.985266 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74447dd785-mk8tc" Feb 25 11:36:23 crc kubenswrapper[5005]: I0225 11:36:23.989582 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6cr5h" podStartSLOduration=3.30960966 podStartE2EDuration="41.98956388s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="2026-02-25 11:35:44.286482388 +0000 UTC m=+1058.327214715" lastFinishedPulling="2026-02-25 11:36:22.966436618 +0000 UTC m=+1097.007168935" observedRunningTime="2026-02-25 11:36:23.983429673 +0000 UTC m=+1098.024162000" watchObservedRunningTime="2026-02-25 11:36:23.98956388 +0000 UTC m=+1098.030296207" Feb 25 11:36:24 crc kubenswrapper[5005]: 
I0225 11:36:24.005953 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vgsgg" Feb 25 11:36:24 crc kubenswrapper[5005]: I0225 11:36:24.006034 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerStarted","Data":"62aff208cc62a3bdea4d6476a9d411808b725c45f76aafc0497409c55486ba84"} Feb 25 11:36:24 crc kubenswrapper[5005]: I0225 11:36:24.010893 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74447dd785-mk8tc" podStartSLOduration=7.010875898 podStartE2EDuration="7.010875898s" podCreationTimestamp="2026-02-25 11:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:24.00601302 +0000 UTC m=+1098.046745347" watchObservedRunningTime="2026-02-25 11:36:24.010875898 +0000 UTC m=+1098.051608225" Feb 25 11:36:24 crc kubenswrapper[5005]: I0225 11:36:24.053712 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vgsgg"] Feb 25 11:36:24 crc kubenswrapper[5005]: I0225 11:36:24.060643 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vgsgg"] Feb 25 11:36:24 crc kubenswrapper[5005]: I0225 11:36:24.698195 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" path="/var/lib/kubelet/pods/ec247cb2-074d-4c74-96fd-62353e78b898/volumes" Feb 25 11:36:25 crc kubenswrapper[5005]: I0225 11:36:25.024730 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qrzvf" event={"ID":"1fa77b98-833c-4278-b615-49e4c28e69c5","Type":"ContainerStarted","Data":"272f28327588b421ed42cdb025b601ba75a500b6159b3c18438088a373c9d492"} Feb 25 11:36:25 crc kubenswrapper[5005]: I0225 11:36:25.050870 5005 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qrzvf" podStartSLOduration=3.693903211 podStartE2EDuration="43.050851802s" podCreationTimestamp="2026-02-25 11:35:42 +0000 UTC" firstStartedPulling="2026-02-25 11:35:44.000564092 +0000 UTC m=+1058.041296419" lastFinishedPulling="2026-02-25 11:36:23.357512683 +0000 UTC m=+1097.398245010" observedRunningTime="2026-02-25 11:36:25.04057411 +0000 UTC m=+1099.081306437" watchObservedRunningTime="2026-02-25 11:36:25.050851802 +0000 UTC m=+1099.091584129" Feb 25 11:36:27 crc kubenswrapper[5005]: I0225 11:36:27.045671 5005 generic.go:334] "Generic (PLEG): container finished" podID="3052cba9-7666-438d-a17c-d3028c836c1d" containerID="a4886de982ae95cffe905f9aa650ce1944e0e94f50b541efd4073a37ba763d49" exitCode=0 Feb 25 11:36:27 crc kubenswrapper[5005]: I0225 11:36:27.045674 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6cr5h" event={"ID":"3052cba9-7666-438d-a17c-d3028c836c1d","Type":"ContainerDied","Data":"a4886de982ae95cffe905f9aa650ce1944e0e94f50b541efd4073a37ba763d49"} Feb 25 11:36:29 crc kubenswrapper[5005]: I0225 11:36:29.073100 5005 generic.go:334] "Generic (PLEG): container finished" podID="1fa77b98-833c-4278-b615-49e4c28e69c5" containerID="272f28327588b421ed42cdb025b601ba75a500b6159b3c18438088a373c9d492" exitCode=0 Feb 25 11:36:29 crc kubenswrapper[5005]: I0225 11:36:29.073451 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qrzvf" event={"ID":"1fa77b98-833c-4278-b615-49e4c28e69c5","Type":"ContainerDied","Data":"272f28327588b421ed42cdb025b601ba75a500b6159b3c18438088a373c9d492"} Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.034832 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.041236 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.098960 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qrzvf" event={"ID":"1fa77b98-833c-4278-b615-49e4c28e69c5","Type":"ContainerDied","Data":"a230d9c90dbfff681ec33f47f3b91d28ef124b70c065ac16bcddae1e24a25e96"} Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.099019 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a230d9c90dbfff681ec33f47f3b91d28ef124b70c065ac16bcddae1e24a25e96" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.099067 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qrzvf" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.107335 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6cr5h" event={"ID":"3052cba9-7666-438d-a17c-d3028c836c1d","Type":"ContainerDied","Data":"c20535fdbb55684460420590cc6bd016c71b21c037bdd52527114a00d37621d2"} Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.107421 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6cr5h" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.107376 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20535fdbb55684460420590cc6bd016c71b21c037bdd52527114a00d37621d2" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.209426 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-db-sync-config-data\") pod \"3052cba9-7666-438d-a17c-d3028c836c1d\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.209680 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa77b98-833c-4278-b615-49e4c28e69c5-etc-machine-id\") pod \"1fa77b98-833c-4278-b615-49e4c28e69c5\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.209754 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k8vh\" (UniqueName: \"kubernetes.io/projected/3052cba9-7666-438d-a17c-d3028c836c1d-kube-api-access-4k8vh\") pod \"3052cba9-7666-438d-a17c-d3028c836c1d\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.209830 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzlv2\" (UniqueName: \"kubernetes.io/projected/1fa77b98-833c-4278-b615-49e4c28e69c5-kube-api-access-rzlv2\") pod \"1fa77b98-833c-4278-b615-49e4c28e69c5\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.209963 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-config-data\") pod 
\"1fa77b98-833c-4278-b615-49e4c28e69c5\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.210098 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-db-sync-config-data\") pod \"1fa77b98-833c-4278-b615-49e4c28e69c5\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.210176 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-scripts\") pod \"1fa77b98-833c-4278-b615-49e4c28e69c5\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.210276 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-combined-ca-bundle\") pod \"3052cba9-7666-438d-a17c-d3028c836c1d\" (UID: \"3052cba9-7666-438d-a17c-d3028c836c1d\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.210398 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-combined-ca-bundle\") pod \"1fa77b98-833c-4278-b615-49e4c28e69c5\" (UID: \"1fa77b98-833c-4278-b615-49e4c28e69c5\") " Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.213511 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fa77b98-833c-4278-b615-49e4c28e69c5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1fa77b98-833c-4278-b615-49e4c28e69c5" (UID: "1fa77b98-833c-4278-b615-49e4c28e69c5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.218939 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa77b98-833c-4278-b615-49e4c28e69c5-kube-api-access-rzlv2" (OuterVolumeSpecName: "kube-api-access-rzlv2") pod "1fa77b98-833c-4278-b615-49e4c28e69c5" (UID: "1fa77b98-833c-4278-b615-49e4c28e69c5"). InnerVolumeSpecName "kube-api-access-rzlv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.221682 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-scripts" (OuterVolumeSpecName: "scripts") pod "1fa77b98-833c-4278-b615-49e4c28e69c5" (UID: "1fa77b98-833c-4278-b615-49e4c28e69c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.237489 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1fa77b98-833c-4278-b615-49e4c28e69c5" (UID: "1fa77b98-833c-4278-b615-49e4c28e69c5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.251906 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3052cba9-7666-438d-a17c-d3028c836c1d-kube-api-access-4k8vh" (OuterVolumeSpecName: "kube-api-access-4k8vh") pod "3052cba9-7666-438d-a17c-d3028c836c1d" (UID: "3052cba9-7666-438d-a17c-d3028c836c1d"). InnerVolumeSpecName "kube-api-access-4k8vh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.263534 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3052cba9-7666-438d-a17c-d3028c836c1d" (UID: "3052cba9-7666-438d-a17c-d3028c836c1d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.297525 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fa77b98-833c-4278-b615-49e4c28e69c5" (UID: "1fa77b98-833c-4278-b615-49e4c28e69c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.314944 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3052cba9-7666-438d-a17c-d3028c836c1d" (UID: "3052cba9-7666-438d-a17c-d3028c836c1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315510 5005 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315545 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315554 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315564 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315574 5005 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3052cba9-7666-438d-a17c-d3028c836c1d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315587 5005 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fa77b98-833c-4278-b615-49e4c28e69c5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.315595 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k8vh\" (UniqueName: \"kubernetes.io/projected/3052cba9-7666-438d-a17c-d3028c836c1d-kube-api-access-4k8vh\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 
11:36:31.315606 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzlv2\" (UniqueName: \"kubernetes.io/projected/1fa77b98-833c-4278-b615-49e4c28e69c5-kube-api-access-rzlv2\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.354746 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 25 11:36:31 crc kubenswrapper[5005]: E0225 11:36:31.355103 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" containerName="init" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.355114 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" containerName="init" Feb 25 11:36:31 crc kubenswrapper[5005]: E0225 11:36:31.355125 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" containerName="dnsmasq-dns" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.355131 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" containerName="dnsmasq-dns" Feb 25 11:36:31 crc kubenswrapper[5005]: E0225 11:36:31.355150 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa77b98-833c-4278-b615-49e4c28e69c5" containerName="cinder-db-sync" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.355156 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa77b98-833c-4278-b615-49e4c28e69c5" containerName="cinder-db-sync" Feb 25 11:36:31 crc kubenswrapper[5005]: E0225 11:36:31.355171 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3052cba9-7666-438d-a17c-d3028c836c1d" containerName="barbican-db-sync" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.355177 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3052cba9-7666-438d-a17c-d3028c836c1d" containerName="barbican-db-sync" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 
11:36:31.355325 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa77b98-833c-4278-b615-49e4c28e69c5" containerName="cinder-db-sync" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.355344 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3052cba9-7666-438d-a17c-d3028c836c1d" containerName="barbican-db-sync" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.355369 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec247cb2-074d-4c74-96fd-62353e78b898" containerName="dnsmasq-dns" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.356209 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.361608 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.399257 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.422515 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-config-data" (OuterVolumeSpecName: "config-data") pod "1fa77b98-833c-4278-b615-49e4c28e69c5" (UID: "1fa77b98-833c-4278-b615-49e4c28e69c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.424575 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa77b98-833c-4278-b615-49e4c28e69c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.515998 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-cjvfj"] Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.529932 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.530665 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph5fz\" (UniqueName: \"kubernetes.io/projected/589109a6-a7f8-484f-a5e2-a83bd90c21c4-kube-api-access-ph5fz\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.530774 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-scripts\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.536605 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " 
pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.536703 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/589109a6-a7f8-484f-a5e2-a83bd90c21c4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.536763 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.536896 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.548904 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-cjvfj"] Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.570161 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f46657c4d-qmj7f" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.582659 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.584207 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.591749 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.593136 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638736 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638784 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-config\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638808 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-logs\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638831 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph5fz\" (UniqueName: \"kubernetes.io/projected/589109a6-a7f8-484f-a5e2-a83bd90c21c4-kube-api-access-ph5fz\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638848 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638874 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-scripts\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638894 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxb82\" (UniqueName: \"kubernetes.io/projected/6332cdda-48e3-42e5-95b6-60948bec88c4-kube-api-access-fxb82\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638910 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-scripts\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638951 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638968 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.638990 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-nb\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639015 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/589109a6-a7f8-484f-a5e2-a83bd90c21c4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639043 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639067 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639085 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczd4\" (UniqueName: \"kubernetes.io/projected/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-kube-api-access-xczd4\") 
pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639118 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-dns-svc\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639132 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.639160 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-sb\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.641867 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/589109a6-a7f8-484f-a5e2-a83bd90c21c4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.646018 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc 
kubenswrapper[5005]: I0225 11:36:31.647091 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-scripts\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.648207 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.656833 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph5fz\" (UniqueName: \"kubernetes.io/projected/589109a6-a7f8-484f-a5e2-a83bd90c21c4-kube-api-access-ph5fz\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.660958 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.701995 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f8bbcbf96-lg5q8" podUID="e277143e-cdb1-4dda-976d-f06c58c14c33" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.702177 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.740956 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741009 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-nb\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741093 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741101 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741323 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczd4\" (UniqueName: \"kubernetes.io/projected/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-kube-api-access-xczd4\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741452 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-dns-svc\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741481 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741547 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-sb\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741625 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-config\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741664 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-logs\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741708 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741781 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxb82\" (UniqueName: \"kubernetes.io/projected/6332cdda-48e3-42e5-95b6-60948bec88c4-kube-api-access-fxb82\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.741805 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-scripts\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.742060 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-nb\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.742350 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-logs\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.742877 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-sb\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.742918 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-config\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.745839 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-dns-svc\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.746512 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-scripts\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.749846 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.758051 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.760821 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data\") pod \"cinder-api-0\" (UID: 
\"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.765246 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxb82\" (UniqueName: \"kubernetes.io/projected/6332cdda-48e3-42e5-95b6-60948bec88c4-kube-api-access-fxb82\") pod \"dnsmasq-dns-f64d5748f-cjvfj\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.765753 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczd4\" (UniqueName: \"kubernetes.io/projected/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-kube-api-access-xczd4\") pod \"cinder-api-0\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " pod="openstack/cinder-api-0" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.879728 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:31 crc kubenswrapper[5005]: I0225 11:36:31.917803 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.272993 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5fb6567bf4-7vxqw"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.274707 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.278356 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9tbgz" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.279846 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.284039 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.300418 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58f48f7767-7pg69"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.301832 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.318273 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fb6567bf4-7vxqw"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.320430 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355263 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/129a4e2f-9f64-4fbb-a36a-f894073762db-logs\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355306 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458c085-0fe5-433b-8f9d-d7406e2cd54c-logs\") pod 
\"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355335 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-config-data-custom\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355361 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7t85\" (UniqueName: \"kubernetes.io/projected/129a4e2f-9f64-4fbb-a36a-f894073762db-kube-api-access-s7t85\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355399 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-config-data\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355440 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-config-data-custom\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355458 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-combined-ca-bundle\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355478 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-combined-ca-bundle\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355495 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-config-data\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.355516 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmfp\" (UniqueName: \"kubernetes.io/projected/2458c085-0fe5-433b-8f9d-d7406e2cd54c-kube-api-access-rcmfp\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.373984 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58f48f7767-7pg69"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457099 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-config-data\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457173 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-config-data-custom\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457199 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-combined-ca-bundle\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457222 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-combined-ca-bundle\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457244 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-config-data\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457268 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rcmfp\" (UniqueName: \"kubernetes.io/projected/2458c085-0fe5-433b-8f9d-d7406e2cd54c-kube-api-access-rcmfp\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457350 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/129a4e2f-9f64-4fbb-a36a-f894073762db-logs\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457365 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458c085-0fe5-433b-8f9d-d7406e2cd54c-logs\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457403 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-config-data-custom\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.457429 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7t85\" (UniqueName: \"kubernetes.io/projected/129a4e2f-9f64-4fbb-a36a-f894073762db-kube-api-access-s7t85\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.470272 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-combined-ca-bundle\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.470649 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/129a4e2f-9f64-4fbb-a36a-f894073762db-logs\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.471201 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458c085-0fe5-433b-8f9d-d7406e2cd54c-logs\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.471246 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-cjvfj"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.475383 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-config-data\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.479236 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-wx6kk"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.480643 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.481934 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-config-data-custom\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.482653 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-config-data\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.484041 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458c085-0fe5-433b-8f9d-d7406e2cd54c-combined-ca-bundle\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.484444 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/129a4e2f-9f64-4fbb-a36a-f894073762db-config-data-custom\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.487183 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-wx6kk"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.510208 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7t85\" (UniqueName: 
\"kubernetes.io/projected/129a4e2f-9f64-4fbb-a36a-f894073762db-kube-api-access-s7t85\") pod \"barbican-keystone-listener-5fb6567bf4-7vxqw\" (UID: \"129a4e2f-9f64-4fbb-a36a-f894073762db\") " pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.539504 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7568566766-7cjl5"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.540954 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.544463 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.552990 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7568566766-7cjl5"] Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.554887 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmfp\" (UniqueName: \"kubernetes.io/projected/2458c085-0fe5-433b-8f9d-d7406e2cd54c-kube-api-access-rcmfp\") pod \"barbican-worker-58f48f7767-7pg69\" (UID: \"2458c085-0fe5-433b-8f9d-d7406e2cd54c\") " pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559605 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-config\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559676 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data\") pod 
\"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559699 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgsh\" (UniqueName: \"kubernetes.io/projected/a6f62bc8-4834-483f-b68d-9f4859378352-kube-api-access-sdgsh\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559749 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559765 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559802 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data-custom\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559820 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sb9\" (UniqueName: 
\"kubernetes.io/projected/8187472b-9222-4f32-a945-eaf167a7600d-kube-api-access-z5sb9\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559838 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187472b-9222-4f32-a945-eaf167a7600d-logs\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559869 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.559914 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-combined-ca-bundle\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.611788 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.651864 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-58f48f7767-7pg69" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663524 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-config\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663608 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663640 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdgsh\" (UniqueName: \"kubernetes.io/projected/a6f62bc8-4834-483f-b68d-9f4859378352-kube-api-access-sdgsh\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663704 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663724 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " 
pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663759 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data-custom\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663788 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sb9\" (UniqueName: \"kubernetes.io/projected/8187472b-9222-4f32-a945-eaf167a7600d-kube-api-access-z5sb9\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663807 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187472b-9222-4f32-a945-eaf167a7600d-logs\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663834 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.663884 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-combined-ca-bundle\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 
11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.665147 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.666264 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-config\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.667227 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187472b-9222-4f32-a945-eaf167a7600d-logs\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.667836 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.668398 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.670857 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-combined-ca-bundle\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.689113 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data-custom\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.701070 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.709778 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sb9\" (UniqueName: \"kubernetes.io/projected/8187472b-9222-4f32-a945-eaf167a7600d-kube-api-access-z5sb9\") pod \"barbican-api-7568566766-7cjl5\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.720483 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdgsh\" (UniqueName: \"kubernetes.io/projected/a6f62bc8-4834-483f-b68d-9f4859378352-kube-api-access-sdgsh\") pod \"dnsmasq-dns-6d97fcdd8f-wx6kk\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.941745 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:32 crc kubenswrapper[5005]: I0225 11:36:32.964154 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:33 crc kubenswrapper[5005]: I0225 11:36:33.636870 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:34 crc kubenswrapper[5005]: E0225 11:36:34.681822 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" Feb 25 11:36:34 crc kubenswrapper[5005]: I0225 11:36:34.952231 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58f48f7767-7pg69"] Feb 25 11:36:34 crc kubenswrapper[5005]: W0225 11:36:34.957103 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2458c085_0fe5_433b_8f9d_d7406e2cd54c.slice/crio-f0642e66a8a5048e93e358f0b8eb538c4d259a3ed77fa5ec544613251ec72d1a WatchSource:0}: Error finding container f0642e66a8a5048e93e358f0b8eb538c4d259a3ed77fa5ec544613251ec72d1a: Status 404 returned error can't find the container with id f0642e66a8a5048e93e358f0b8eb538c4d259a3ed77fa5ec544613251ec72d1a Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.157941 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerStarted","Data":"de7563e33605dac2327bf1d243a5b148e27e347a04adc72180152695e2a54295"} Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.158094 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" 
containerName="ceilometer-notification-agent" containerID="cri-o://fd4b1f1784a58de950cae7d57546ef271186ae04bd51d79894a828b440dd457d" gracePeriod=30 Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.158344 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.158627 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="proxy-httpd" containerID="cri-o://de7563e33605dac2327bf1d243a5b148e27e347a04adc72180152695e2a54295" gracePeriod=30 Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.158674 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="sg-core" containerID="cri-o://62aff208cc62a3bdea4d6476a9d411808b725c45f76aafc0497409c55486ba84" gracePeriod=30 Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.160126 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f48f7767-7pg69" event={"ID":"2458c085-0fe5-433b-8f9d-d7406e2cd54c","Type":"ContainerStarted","Data":"f0642e66a8a5048e93e358f0b8eb538c4d259a3ed77fa5ec544613251ec72d1a"} Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.299285 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:35 crc kubenswrapper[5005]: W0225 11:36:35.303041 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod129a4e2f_9f64_4fbb_a36a_f894073762db.slice/crio-cbe6d79283f3cea6c3da435b053c4c111798c77c11a1b2d900dfac19cd813a79 WatchSource:0}: Error finding container cbe6d79283f3cea6c3da435b053c4c111798c77c11a1b2d900dfac19cd813a79: Status 404 returned error can't find the container with id cbe6d79283f3cea6c3da435b053c4c111798c77c11a1b2d900dfac19cd813a79 
Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.310827 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fb6567bf4-7vxqw"] Feb 25 11:36:35 crc kubenswrapper[5005]: W0225 11:36:35.314654 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcab298_9e07_4fc1_aea3_7ac559e0c6f1.slice/crio-19a118148bdbb2d477f1239a0387b03ca44ec718cc9d9870caa08cf8d3eb9efe WatchSource:0}: Error finding container 19a118148bdbb2d477f1239a0387b03ca44ec718cc9d9870caa08cf8d3eb9efe: Status 404 returned error can't find the container with id 19a118148bdbb2d477f1239a0387b03ca44ec718cc9d9870caa08cf8d3eb9efe Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.318978 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-cjvfj"] Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.330304 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.338885 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7568566766-7cjl5"] Feb 25 11:36:35 crc kubenswrapper[5005]: I0225 11:36:35.353167 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-wx6kk"] Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.179800 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" event={"ID":"129a4e2f-9f64-4fbb-a36a-f894073762db","Type":"ContainerStarted","Data":"cbe6d79283f3cea6c3da435b053c4c111798c77c11a1b2d900dfac19cd813a79"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.182937 5005 generic.go:334] "Generic (PLEG): container finished" podID="7b033028-9ac7-43d8-95da-931e7a25249a" containerID="de7563e33605dac2327bf1d243a5b148e27e347a04adc72180152695e2a54295" exitCode=0 Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 
11:36:36.182963 5005 generic.go:334] "Generic (PLEG): container finished" podID="7b033028-9ac7-43d8-95da-931e7a25249a" containerID="62aff208cc62a3bdea4d6476a9d411808b725c45f76aafc0497409c55486ba84" exitCode=2 Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.183010 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerDied","Data":"de7563e33605dac2327bf1d243a5b148e27e347a04adc72180152695e2a54295"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.183056 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerDied","Data":"62aff208cc62a3bdea4d6476a9d411808b725c45f76aafc0497409c55486ba84"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.184774 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7568566766-7cjl5" event={"ID":"8187472b-9222-4f32-a945-eaf167a7600d","Type":"ContainerStarted","Data":"a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.184802 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7568566766-7cjl5" event={"ID":"8187472b-9222-4f32-a945-eaf167a7600d","Type":"ContainerStarted","Data":"140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.184813 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7568566766-7cjl5" event={"ID":"8187472b-9222-4f32-a945-eaf167a7600d","Type":"ContainerStarted","Data":"5801abc1771605454fca162d8f1c18f97d7f6c751d23c9b881acc487a033d954"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.185332 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.185391 5005 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.188292 5005 generic.go:334] "Generic (PLEG): container finished" podID="a6f62bc8-4834-483f-b68d-9f4859378352" containerID="59ded5a7bff5500b0366ee3a26f41f9ff45436b7d4176028d930469f5f13f99e" exitCode=0 Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.188388 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" event={"ID":"a6f62bc8-4834-483f-b68d-9f4859378352","Type":"ContainerDied","Data":"59ded5a7bff5500b0366ee3a26f41f9ff45436b7d4176028d930469f5f13f99e"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.188409 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" event={"ID":"a6f62bc8-4834-483f-b68d-9f4859378352","Type":"ContainerStarted","Data":"6f3c6899a15a9e128749c77743644109c40c4f97d0359ed82d22ad76b1711658"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.192729 5005 generic.go:334] "Generic (PLEG): container finished" podID="6332cdda-48e3-42e5-95b6-60948bec88c4" containerID="f51229d27a7158b96d53c69b87c854ae9fecb17eed462f51687d84ace05f690a" exitCode=0 Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.192800 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" event={"ID":"6332cdda-48e3-42e5-95b6-60948bec88c4","Type":"ContainerDied","Data":"f51229d27a7158b96d53c69b87c854ae9fecb17eed462f51687d84ace05f690a"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.192835 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" event={"ID":"6332cdda-48e3-42e5-95b6-60948bec88c4","Type":"ContainerStarted","Data":"817f351159d5b13a955dd9a3adb203962c192284db4a47ce56e4ddc26c274c43"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.196619 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"589109a6-a7f8-484f-a5e2-a83bd90c21c4","Type":"ContainerStarted","Data":"c45447c083f0559152311b85ea7216fc8ae430c9108f05b65f7d19d958bbe617"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.204226 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1","Type":"ContainerStarted","Data":"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.204272 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1","Type":"ContainerStarted","Data":"19a118148bdbb2d477f1239a0387b03ca44ec718cc9d9870caa08cf8d3eb9efe"} Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.218880 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7568566766-7cjl5" podStartSLOduration=4.218855383 podStartE2EDuration="4.218855383s" podCreationTimestamp="2026-02-25 11:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:36.203699614 +0000 UTC m=+1110.244431941" watchObservedRunningTime="2026-02-25 11:36:36.218855383 +0000 UTC m=+1110.259587710" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.799845 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.945021 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-dns-svc\") pod \"6332cdda-48e3-42e5-95b6-60948bec88c4\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.945060 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-config\") pod \"6332cdda-48e3-42e5-95b6-60948bec88c4\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.945131 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-sb\") pod \"6332cdda-48e3-42e5-95b6-60948bec88c4\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.945235 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxb82\" (UniqueName: \"kubernetes.io/projected/6332cdda-48e3-42e5-95b6-60948bec88c4-kube-api-access-fxb82\") pod \"6332cdda-48e3-42e5-95b6-60948bec88c4\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.945281 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-nb\") pod \"6332cdda-48e3-42e5-95b6-60948bec88c4\" (UID: \"6332cdda-48e3-42e5-95b6-60948bec88c4\") " Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.952095 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6332cdda-48e3-42e5-95b6-60948bec88c4-kube-api-access-fxb82" (OuterVolumeSpecName: "kube-api-access-fxb82") pod "6332cdda-48e3-42e5-95b6-60948bec88c4" (UID: "6332cdda-48e3-42e5-95b6-60948bec88c4"). InnerVolumeSpecName "kube-api-access-fxb82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.966444 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6332cdda-48e3-42e5-95b6-60948bec88c4" (UID: "6332cdda-48e3-42e5-95b6-60948bec88c4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.974552 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-config" (OuterVolumeSpecName: "config") pod "6332cdda-48e3-42e5-95b6-60948bec88c4" (UID: "6332cdda-48e3-42e5-95b6-60948bec88c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.977495 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6332cdda-48e3-42e5-95b6-60948bec88c4" (UID: "6332cdda-48e3-42e5-95b6-60948bec88c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:36 crc kubenswrapper[5005]: I0225 11:36:36.981829 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6332cdda-48e3-42e5-95b6-60948bec88c4" (UID: "6332cdda-48e3-42e5-95b6-60948bec88c4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.047253 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.047290 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.047302 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.047313 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxb82\" (UniqueName: \"kubernetes.io/projected/6332cdda-48e3-42e5-95b6-60948bec88c4-kube-api-access-fxb82\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.047322 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6332cdda-48e3-42e5-95b6-60948bec88c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.215901 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1","Type":"ContainerStarted","Data":"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614"} Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.216224 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.215992 5005 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api" containerID="cri-o://5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614" gracePeriod=30 Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.215960 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api-log" containerID="cri-o://01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893" gracePeriod=30 Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.228968 5005 generic.go:334] "Generic (PLEG): container finished" podID="7b033028-9ac7-43d8-95da-931e7a25249a" containerID="fd4b1f1784a58de950cae7d57546ef271186ae04bd51d79894a828b440dd457d" exitCode=0 Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.229061 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerDied","Data":"fd4b1f1784a58de950cae7d57546ef271186ae04bd51d79894a828b440dd457d"} Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.237313 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.237300378 podStartE2EDuration="6.237300378s" podCreationTimestamp="2026-02-25 11:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:37.232768221 +0000 UTC m=+1111.273500548" watchObservedRunningTime="2026-02-25 11:36:37.237300378 +0000 UTC m=+1111.278032705" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.239475 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.240013 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f64d5748f-cjvfj" event={"ID":"6332cdda-48e3-42e5-95b6-60948bec88c4","Type":"ContainerDied","Data":"817f351159d5b13a955dd9a3adb203962c192284db4a47ce56e4ddc26c274c43"} Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.240047 5005 scope.go:117] "RemoveContainer" containerID="f51229d27a7158b96d53c69b87c854ae9fecb17eed462f51687d84ace05f690a" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.575010 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-cjvfj"] Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.594262 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f64d5748f-cjvfj"] Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.640622 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.648643 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64d599957d-86x2s"] Feb 25 11:36:37 crc kubenswrapper[5005]: E0225 11:36:37.648976 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6332cdda-48e3-42e5-95b6-60948bec88c4" containerName="init" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.648989 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6332cdda-48e3-42e5-95b6-60948bec88c4" containerName="init" Feb 25 11:36:37 crc kubenswrapper[5005]: E0225 11:36:37.649002 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="proxy-httpd" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.649007 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="proxy-httpd" Feb 25 11:36:37 crc kubenswrapper[5005]: E0225 11:36:37.649023 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="ceilometer-notification-agent" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.649029 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="ceilometer-notification-agent" Feb 25 11:36:37 crc kubenswrapper[5005]: E0225 11:36:37.649040 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="sg-core" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.649045 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="sg-core" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.649241 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="sg-core" Feb 25 11:36:37 
crc kubenswrapper[5005]: I0225 11:36:37.649262 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="proxy-httpd" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.649274 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6332cdda-48e3-42e5-95b6-60948bec88c4" containerName="init" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.649281 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" containerName="ceilometer-notification-agent" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.651485 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.655571 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.655585 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.673651 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64d599957d-86x2s"] Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.779694 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-config-data\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.780661 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-combined-ca-bundle\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 
11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.780776 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-run-httpd\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.780803 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-log-httpd\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.780949 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rjp\" (UniqueName: \"kubernetes.io/projected/7b033028-9ac7-43d8-95da-931e7a25249a-kube-api-access-68rjp\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.780983 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-scripts\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781005 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-sg-core-conf-yaml\") pod \"7b033028-9ac7-43d8-95da-931e7a25249a\" (UID: \"7b033028-9ac7-43d8-95da-931e7a25249a\") " Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781251 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7a211eb-8b24-4344-a82e-65ad4770881a-logs\") 
pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781355 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-config-data\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781447 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-internal-tls-certs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781504 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c82r\" (UniqueName: \"kubernetes.io/projected/e7a211eb-8b24-4344-a82e-65ad4770881a-kube-api-access-8c82r\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781541 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-config-data-custom\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781567 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-public-tls-certs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.781705 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-combined-ca-bundle\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.782416 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.784393 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.789203 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-scripts" (OuterVolumeSpecName: "scripts") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.791975 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b033028-9ac7-43d8-95da-931e7a25249a-kube-api-access-68rjp" (OuterVolumeSpecName: "kube-api-access-68rjp") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "kube-api-access-68rjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.817437 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.846048 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.885813 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7a211eb-8b24-4344-a82e-65ad4770881a-logs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.885873 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-config-data\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.886329 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7a211eb-8b24-4344-a82e-65ad4770881a-logs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.887542 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-internal-tls-certs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.888236 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c82r\" (UniqueName: \"kubernetes.io/projected/e7a211eb-8b24-4344-a82e-65ad4770881a-kube-api-access-8c82r\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc 
kubenswrapper[5005]: I0225 11:36:37.888276 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-config-data-custom\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.888343 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-public-tls-certs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.888481 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-combined-ca-bundle\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.889258 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.889281 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.889293 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b033028-9ac7-43d8-95da-931e7a25249a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 
11:36:37.893440 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rjp\" (UniqueName: \"kubernetes.io/projected/7b033028-9ac7-43d8-95da-931e7a25249a-kube-api-access-68rjp\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.893491 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.893505 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.893399 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-internal-tls-certs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.893165 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-config-data\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.892490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-public-tls-certs\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.893297 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-combined-ca-bundle\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.893988 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-config-data" (OuterVolumeSpecName: "config-data") pod "7b033028-9ac7-43d8-95da-931e7a25249a" (UID: "7b033028-9ac7-43d8-95da-931e7a25249a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.896067 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a211eb-8b24-4344-a82e-65ad4770881a-config-data-custom\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.904478 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c82r\" (UniqueName: \"kubernetes.io/projected/e7a211eb-8b24-4344-a82e-65ad4770881a-kube-api-access-8c82r\") pod \"barbican-api-64d599957d-86x2s\" (UID: \"e7a211eb-8b24-4344-a82e-65ad4770881a\") " pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.983430 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:37 crc kubenswrapper[5005]: I0225 11:36:37.996474 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b033028-9ac7-43d8-95da-931e7a25249a-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.054665 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199038 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-scripts\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199485 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-etc-machine-id\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199583 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data-custom\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199683 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xczd4\" (UniqueName: \"kubernetes.io/projected/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-kube-api-access-xczd4\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199819 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199895 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-combined-ca-bundle\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.200038 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-logs\") pod \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\" (UID: \"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1\") " Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.199902 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.201339 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-logs" (OuterVolumeSpecName: "logs") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.205443 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.207760 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-scripts" (OuterVolumeSpecName: "scripts") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.208008 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-kube-api-access-xczd4" (OuterVolumeSpecName: "kube-api-access-xczd4") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "kube-api-access-xczd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.252572 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.259998 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data" (OuterVolumeSpecName: "config-data") pod "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" (UID: "bfcab298-9e07-4fc1-aea3-7ac559e0c6f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.283550 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b033028-9ac7-43d8-95da-931e7a25249a","Type":"ContainerDied","Data":"d34e179fc10524e2df3a93523d86e2098350193a6f0bfd4f838b784475518238"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.283602 5005 scope.go:117] "RemoveContainer" containerID="de7563e33605dac2327bf1d243a5b148e27e347a04adc72180152695e2a54295" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.283721 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.292740 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f48f7767-7pg69" event={"ID":"2458c085-0fe5-433b-8f9d-d7406e2cd54c","Type":"ContainerStarted","Data":"440d29e791e19cfb894bc4079ce9e651e162fcbbd56386dc737e2afc9972044b"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.292777 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f48f7767-7pg69" event={"ID":"2458c085-0fe5-433b-8f9d-d7406e2cd54c","Type":"ContainerStarted","Data":"85b6001d0aec3e0bcc71755b9a7c717212361113839e871354b073a19e379af9"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.297268 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" event={"ID":"a6f62bc8-4834-483f-b68d-9f4859378352","Type":"ContainerStarted","Data":"b058113ab4aabc2fbb0bd521ac34532433b4f3944166e7fa34a033e8a1ff8016"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.297305 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.300884 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"589109a6-a7f8-484f-a5e2-a83bd90c21c4","Type":"ContainerStarted","Data":"7acf02f2b76172ff659a62918955296b591175fa8dcc1c99c9fe19da0d02dec0"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.301893 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.301919 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 
11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.301932 5005 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.301973 5005 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.301985 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xczd4\" (UniqueName: \"kubernetes.io/projected/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-kube-api-access-xczd4\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.301996 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.302006 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.303133 5005 generic.go:334] "Generic (PLEG): container finished" podID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerID="5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614" exitCode=0 Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.303156 5005 generic.go:334] "Generic (PLEG): container finished" podID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerID="01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893" exitCode=143 Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.303191 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1","Type":"ContainerDied","Data":"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.303215 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1","Type":"ContainerDied","Data":"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.303225 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bfcab298-9e07-4fc1-aea3-7ac559e0c6f1","Type":"ContainerDied","Data":"19a118148bdbb2d477f1239a0387b03ca44ec718cc9d9870caa08cf8d3eb9efe"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.303277 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.313073 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" event={"ID":"129a4e2f-9f64-4fbb-a36a-f894073762db","Type":"ContainerStarted","Data":"4e182dfbf075a6173a159443e87c893fd3044a3ab30c2cdc0d7d6d52a0114154"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.313118 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" event={"ID":"129a4e2f-9f64-4fbb-a36a-f894073762db","Type":"ContainerStarted","Data":"f289bee1e29ed9be257539cef4a121093729488b2229ef32a885923a76ce157e"} Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.343879 5005 scope.go:117] "RemoveContainer" containerID="62aff208cc62a3bdea4d6476a9d411808b725c45f76aafc0497409c55486ba84" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.360053 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5fb6567bf4-7vxqw" 
podStartSLOduration=4.444190247 podStartE2EDuration="6.360036862s" podCreationTimestamp="2026-02-25 11:36:32 +0000 UTC" firstStartedPulling="2026-02-25 11:36:35.30447316 +0000 UTC m=+1109.345205487" lastFinishedPulling="2026-02-25 11:36:37.220319775 +0000 UTC m=+1111.261052102" observedRunningTime="2026-02-25 11:36:38.346977247 +0000 UTC m=+1112.387709564" watchObservedRunningTime="2026-02-25 11:36:38.360036862 +0000 UTC m=+1112.400769189" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.360235 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58f48f7767-7pg69" podStartSLOduration=4.104656903 podStartE2EDuration="6.360231198s" podCreationTimestamp="2026-02-25 11:36:32 +0000 UTC" firstStartedPulling="2026-02-25 11:36:34.959073248 +0000 UTC m=+1108.999805575" lastFinishedPulling="2026-02-25 11:36:37.214647543 +0000 UTC m=+1111.255379870" observedRunningTime="2026-02-25 11:36:38.321529536 +0000 UTC m=+1112.362261863" watchObservedRunningTime="2026-02-25 11:36:38.360231198 +0000 UTC m=+1112.400963525" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.375615 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" podStartSLOduration=6.375597914 podStartE2EDuration="6.375597914s" podCreationTimestamp="2026-02-25 11:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:38.369512529 +0000 UTC m=+1112.410244856" watchObservedRunningTime="2026-02-25 11:36:38.375597914 +0000 UTC m=+1112.416330241" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.379824 5005 scope.go:117] "RemoveContainer" containerID="fd4b1f1784a58de950cae7d57546ef271186ae04bd51d79894a828b440dd457d" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.440550 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:36:38 crc 
kubenswrapper[5005]: I0225 11:36:38.463222 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.475871 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: E0225 11:36:38.477611 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api-log" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.477633 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api-log" Feb 25 11:36:38 crc kubenswrapper[5005]: E0225 11:36:38.477654 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.477660 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.477826 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.477841 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" containerName="cinder-api-log" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.480333 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.482194 5005 scope.go:117] "RemoveContainer" containerID="5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.484745 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.486052 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.494205 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.495669 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.505744 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.518973 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.520242 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.522928 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.522977 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.523109 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.544815 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.551933 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64d599957d-86x2s"] Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.598470 5005 scope.go:117] "RemoveContainer" containerID="01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617024 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qvw\" (UniqueName: \"kubernetes.io/projected/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-kube-api-access-99qvw\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617104 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-scripts\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617131 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-config-data\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617148 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617163 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617287 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-scripts\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617311 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-run-httpd\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617333 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617347 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-logs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617387 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617426 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-log-httpd\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617470 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46fp\" (UniqueName: \"kubernetes.io/projected/097d7f93-e779-4856-a6d1-57be2bcba899-kube-api-access-h46fp\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617496 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 
11:36:38.617529 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617545 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.617635 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-config-data\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.662468 5005 scope.go:117] "RemoveContainer" containerID="5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614" Feb 25 11:36:38 crc kubenswrapper[5005]: E0225 11:36:38.662851 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614\": container with ID starting with 5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614 not found: ID does not exist" containerID="5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.662899 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614"} err="failed to get container status 
\"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614\": rpc error: code = NotFound desc = could not find container \"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614\": container with ID starting with 5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614 not found: ID does not exist" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.663035 5005 scope.go:117] "RemoveContainer" containerID="01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893" Feb 25 11:36:38 crc kubenswrapper[5005]: E0225 11:36:38.663390 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893\": container with ID starting with 01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893 not found: ID does not exist" containerID="01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.663429 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893"} err="failed to get container status \"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893\": rpc error: code = NotFound desc = could not find container \"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893\": container with ID starting with 01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893 not found: ID does not exist" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.663454 5005 scope.go:117] "RemoveContainer" containerID="5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.663704 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614"} err="failed to get 
container status \"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614\": rpc error: code = NotFound desc = could not find container \"5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614\": container with ID starting with 5a9be008fd3b33f0844f494fdd4a6050aa32fff4045bedb3d5e05c45c1314614 not found: ID does not exist" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.663723 5005 scope.go:117] "RemoveContainer" containerID="01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.663951 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893"} err="failed to get container status \"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893\": rpc error: code = NotFound desc = could not find container \"01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893\": container with ID starting with 01b02c7df172f063809ed4b9894d989f5bf3b24387e8bc814c16cedb76679893 not found: ID does not exist" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.695071 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6332cdda-48e3-42e5-95b6-60948bec88c4" path="/var/lib/kubelet/pods/6332cdda-48e3-42e5-95b6-60948bec88c4/volumes" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.695748 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b033028-9ac7-43d8-95da-931e7a25249a" path="/var/lib/kubelet/pods/7b033028-9ac7-43d8-95da-931e7a25249a/volumes" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.696363 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfcab298-9e07-4fc1-aea3-7ac559e0c6f1" path="/var/lib/kubelet/pods/bfcab298-9e07-4fc1-aea3-7ac559e0c6f1/volumes" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718779 5005 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718829 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718851 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718879 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-config-data\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718909 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qvw\" (UniqueName: \"kubernetes.io/projected/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-kube-api-access-99qvw\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718926 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-scripts\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " 
pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718949 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-config-data\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718966 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.718983 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719013 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-scripts\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719029 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-run-httpd\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719045 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719059 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-logs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719084 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719104 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-log-httpd\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.719141 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46fp\" (UniqueName: \"kubernetes.io/projected/097d7f93-e779-4856-a6d1-57be2bcba899-kube-api-access-h46fp\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.724743 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-logs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.725077 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.729556 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-run-httpd\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.731021 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.731813 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-log-httpd\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.732057 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.732104 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-scripts\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" 
Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.732572 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.732937 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-config-data\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.733918 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.734876 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.738663 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-scripts\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.739239 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-config-data\") pod \"ceilometer-0\" (UID: 
\"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.746963 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46fp\" (UniqueName: \"kubernetes.io/projected/097d7f93-e779-4856-a6d1-57be2bcba899-kube-api-access-h46fp\") pod \"ceilometer-0\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.756044 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qvw\" (UniqueName: \"kubernetes.io/projected/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-kube-api-access-99qvw\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.757871 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe51dd1b-0b94-44f9-b7a5-e76b07e62f10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10\") " pod="openstack/cinder-api-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.815479 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:36:38 crc kubenswrapper[5005]: I0225 11:36:38.899100 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 25 11:36:39 crc kubenswrapper[5005]: I0225 11:36:39.324714 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"589109a6-a7f8-484f-a5e2-a83bd90c21c4","Type":"ContainerStarted","Data":"9526254a94b7abd9ed1e760bbc415671a99ce7ff8204c02ebb75234234e8183f"} Feb 25 11:36:39 crc kubenswrapper[5005]: I0225 11:36:39.328100 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64d599957d-86x2s" event={"ID":"e7a211eb-8b24-4344-a82e-65ad4770881a","Type":"ContainerStarted","Data":"62d0493776774d0af28937a08370c436ecbfa0e38a411240a689f9a3be7aea42"} Feb 25 11:36:39 crc kubenswrapper[5005]: I0225 11:36:39.328330 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64d599957d-86x2s" event={"ID":"e7a211eb-8b24-4344-a82e-65ad4770881a","Type":"ContainerStarted","Data":"8801a7ee692a120f3aee7887a5c27d9fdd04c7fe79e561f1e8df58bd71104e3e"} Feb 25 11:36:39 crc kubenswrapper[5005]: I0225 11:36:39.384648 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:36:39 crc kubenswrapper[5005]: I0225 11:36:39.397186 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.55079854 podStartE2EDuration="8.397161213s" podCreationTimestamp="2026-02-25 11:36:31 +0000 UTC" firstStartedPulling="2026-02-25 11:36:35.335817259 +0000 UTC m=+1109.376549586" lastFinishedPulling="2026-02-25 11:36:36.182179932 +0000 UTC m=+1110.222912259" observedRunningTime="2026-02-25 11:36:39.346242371 +0000 UTC m=+1113.386974698" watchObservedRunningTime="2026-02-25 11:36:39.397161213 +0000 UTC m=+1113.437893540" Feb 25 11:36:39 crc kubenswrapper[5005]: I0225 11:36:39.512945 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.019230 5005 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/neutron-5f5b655546-bjxxg" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.278989 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b769c78f-h4vrt"] Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.280278 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b769c78f-h4vrt" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-api" containerID="cri-o://3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b" gracePeriod=30 Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.280828 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b769c78f-h4vrt" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-httpd" containerID="cri-o://96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847" gracePeriod=30 Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.328353 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84f5fb5d89-rmhhp"] Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.330143 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.345607 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.371727 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f5fb5d89-rmhhp"] Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.392906 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-config\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.392952 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-ovndb-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.392993 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lxv4\" (UniqueName: \"kubernetes.io/projected/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-kube-api-access-7lxv4\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.393009 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-public-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" 
Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.393027 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-combined-ca-bundle\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.393048 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-internal-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.393096 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-httpd-config\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.418575 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10","Type":"ContainerStarted","Data":"bfe06e0cb65b9e291f75dab9cbd192c63f5d3cfb6af4149a519a9a0ad94ae7c2"} Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.418621 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10","Type":"ContainerStarted","Data":"4ae157fc18b2f0ae909c9e7dfec16d5f105a69c6310a591548af6f37b823205c"} Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.440161 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerStarted","Data":"379ea284f05c30caaa955948d781b47438af9a09729d15fc5f003f54cb5857ad"} Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.448018 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64d599957d-86x2s" event={"ID":"e7a211eb-8b24-4344-a82e-65ad4770881a","Type":"ContainerStarted","Data":"b58752cff85f27b1b6d32165910b460cf68b356f38fa62442b4d43af8b9ed050"} Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.448081 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.448098 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64d599957d-86x2s" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.476734 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64d599957d-86x2s" podStartSLOduration=3.476714189 podStartE2EDuration="3.476714189s" podCreationTimestamp="2026-02-25 11:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:40.464090737 +0000 UTC m=+1114.504823074" watchObservedRunningTime="2026-02-25 11:36:40.476714189 +0000 UTC m=+1114.517446516" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509135 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-config\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509248 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-ovndb-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509682 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lxv4\" (UniqueName: \"kubernetes.io/projected/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-kube-api-access-7lxv4\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509719 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-public-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509746 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-combined-ca-bundle\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509776 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-internal-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.509923 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-httpd-config\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.516038 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-internal-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.516359 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-public-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.517421 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-httpd-config\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.518158 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-combined-ca-bundle\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.519635 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-ovndb-tls-certs\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: 
\"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.519841 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-config\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.532177 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lxv4\" (UniqueName: \"kubernetes.io/projected/ced877ad-f6e2-4f0e-a4dd-6bf09e612717-kube-api-access-7lxv4\") pod \"neutron-84f5fb5d89-rmhhp\" (UID: \"ced877ad-f6e2-4f0e-a4dd-6bf09e612717\") " pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:40 crc kubenswrapper[5005]: I0225 11:36:40.745507 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.342538 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f5fb5d89-rmhhp"] Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.456027 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f5fb5d89-rmhhp" event={"ID":"ced877ad-f6e2-4f0e-a4dd-6bf09e612717","Type":"ContainerStarted","Data":"b1e02e9087305ea4af5ec2f7ebaac90d7c18354a483a2936e5283df3711aefda"} Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.458051 5005 generic.go:334] "Generic (PLEG): container finished" podID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerID="96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847" exitCode=0 Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.458095 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b769c78f-h4vrt" 
event={"ID":"d92e4f70-284c-4330-94bd-5a052d96ac39","Type":"ContainerDied","Data":"96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847"} Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.460193 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe51dd1b-0b94-44f9-b7a5-e76b07e62f10","Type":"ContainerStarted","Data":"1cf8bbac45b886f49e3f98f3f3fba1505086fce00afcea6e2c78f65e3c0c0d08"} Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.460322 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.462029 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerStarted","Data":"eefed9e1b907d4da7ea0caf8cc5f3b30234611827cf8c3652fce559c4548094c"} Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.462183 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerStarted","Data":"b60d0c9060998ffc054412737a4e6e862d044b5bba0d11788fd64d351fa21e51"} Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.481413 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.481395387 podStartE2EDuration="3.481395387s" podCreationTimestamp="2026-02-25 11:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:41.480265314 +0000 UTC m=+1115.520997641" watchObservedRunningTime="2026-02-25 11:36:41.481395387 +0000 UTC m=+1115.522127714" Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.677522 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b769c78f-h4vrt" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-httpd" 
probeResult="failure" output="Get \"https://10.217.0.151:9696/\": dial tcp 10.217.0.151:9696: connect: connection refused" Feb 25 11:36:41 crc kubenswrapper[5005]: I0225 11:36:41.705476 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 25 11:36:42 crc kubenswrapper[5005]: I0225 11:36:42.472324 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f5fb5d89-rmhhp" event={"ID":"ced877ad-f6e2-4f0e-a4dd-6bf09e612717","Type":"ContainerStarted","Data":"892a83cb1efc57cfad185ce709de2ce082fb177877d94aa78f3f19a913f7a283"} Feb 25 11:36:42 crc kubenswrapper[5005]: I0225 11:36:42.472622 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f5fb5d89-rmhhp" event={"ID":"ced877ad-f6e2-4f0e-a4dd-6bf09e612717","Type":"ContainerStarted","Data":"3834e702c616ad54e99a3a42b7fec760352c9989c654ab50ea84f964656d8391"} Feb 25 11:36:42 crc kubenswrapper[5005]: I0225 11:36:42.472639 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:36:42 crc kubenswrapper[5005]: I0225 11:36:42.474676 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerStarted","Data":"934789b36a79ce497a935186463f6c524add0e6c8c8062d0d85a1f74fda9697d"} Feb 25 11:36:42 crc kubenswrapper[5005]: I0225 11:36:42.496329 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84f5fb5d89-rmhhp" podStartSLOduration=2.496288566 podStartE2EDuration="2.496288566s" podCreationTimestamp="2026-02-25 11:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:42.487452648 +0000 UTC m=+1116.528184975" watchObservedRunningTime="2026-02-25 11:36:42.496288566 +0000 UTC m=+1116.537020933" Feb 25 11:36:42 crc kubenswrapper[5005]: 
I0225 11:36:42.945592 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.031999 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-rccrg"] Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.032215 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerName="dnsmasq-dns" containerID="cri-o://d6d5ec54eb34191d83027271b852ea585a484b7cf2e27314616a6f49207384e1" gracePeriod=10 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.285809 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.399279 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-config-data\") pod \"4869a7a1-945a-47d9-9239-b1537f04be41\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.399392 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4869a7a1-945a-47d9-9239-b1537f04be41-horizon-secret-key\") pod \"4869a7a1-945a-47d9-9239-b1537f04be41\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.399496 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fk7c\" (UniqueName: \"kubernetes.io/projected/4869a7a1-945a-47d9-9239-b1537f04be41-kube-api-access-2fk7c\") pod \"4869a7a1-945a-47d9-9239-b1537f04be41\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.399595 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4869a7a1-945a-47d9-9239-b1537f04be41-logs\") pod \"4869a7a1-945a-47d9-9239-b1537f04be41\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.399674 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-scripts\") pod \"4869a7a1-945a-47d9-9239-b1537f04be41\" (UID: \"4869a7a1-945a-47d9-9239-b1537f04be41\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.402072 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4869a7a1-945a-47d9-9239-b1537f04be41-logs" (OuterVolumeSpecName: "logs") pod "4869a7a1-945a-47d9-9239-b1537f04be41" (UID: "4869a7a1-945a-47d9-9239-b1537f04be41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.407108 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4869a7a1-945a-47d9-9239-b1537f04be41-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4869a7a1-945a-47d9-9239-b1537f04be41" (UID: "4869a7a1-945a-47d9-9239-b1537f04be41"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.425620 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4869a7a1-945a-47d9-9239-b1537f04be41-kube-api-access-2fk7c" (OuterVolumeSpecName: "kube-api-access-2fk7c") pod "4869a7a1-945a-47d9-9239-b1537f04be41" (UID: "4869a7a1-945a-47d9-9239-b1537f04be41"). InnerVolumeSpecName "kube-api-access-2fk7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.451220 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-config-data" (OuterVolumeSpecName: "config-data") pod "4869a7a1-945a-47d9-9239-b1537f04be41" (UID: "4869a7a1-945a-47d9-9239-b1537f04be41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.502594 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fk7c\" (UniqueName: \"kubernetes.io/projected/4869a7a1-945a-47d9-9239-b1537f04be41-kube-api-access-2fk7c\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.502620 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4869a7a1-945a-47d9-9239-b1537f04be41-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.502628 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.502636 5005 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4869a7a1-945a-47d9-9239-b1537f04be41-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530522 5005 generic.go:334] "Generic (PLEG): container finished" podID="4869a7a1-945a-47d9-9239-b1537f04be41" containerID="1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7" exitCode=137 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530555 5005 generic.go:334] "Generic (PLEG): container finished" podID="4869a7a1-945a-47d9-9239-b1537f04be41" 
containerID="c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410" exitCode=137 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530598 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc6f57c7-g7rsc" event={"ID":"4869a7a1-945a-47d9-9239-b1537f04be41","Type":"ContainerDied","Data":"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530624 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc6f57c7-g7rsc" event={"ID":"4869a7a1-945a-47d9-9239-b1537f04be41","Type":"ContainerDied","Data":"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530634 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc6f57c7-g7rsc" event={"ID":"4869a7a1-945a-47d9-9239-b1537f04be41","Type":"ContainerDied","Data":"b05ebe99848f4cf675a24f3c1817f8ba628960f28a60561e56b5ec94fc2c4bcb"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530650 5005 scope.go:117] "RemoveContainer" containerID="1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.530789 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bc6f57c7-g7rsc" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.533707 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-scripts" (OuterVolumeSpecName: "scripts") pod "4869a7a1-945a-47d9-9239-b1537f04be41" (UID: "4869a7a1-945a-47d9-9239-b1537f04be41"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.569097 5005 generic.go:334] "Generic (PLEG): container finished" podID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerID="d6d5ec54eb34191d83027271b852ea585a484b7cf2e27314616a6f49207384e1" exitCode=0 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.569146 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" event={"ID":"20571c67-683c-4662-b39e-6dbd75aa8c51","Type":"ContainerDied","Data":"d6d5ec54eb34191d83027271b852ea585a484b7cf2e27314616a6f49207384e1"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.601280 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.603822 5005 generic.go:334] "Generic (PLEG): container finished" podID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerID="c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126" exitCode=137 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.603847 5005 generic.go:334] "Generic (PLEG): container finished" podID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerID="4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65" exitCode=137 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.603890 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff575667-wqvzf" event={"ID":"9058179d-2c7b-4d0b-b162-10141ea754b0","Type":"ContainerDied","Data":"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.604056 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff575667-wqvzf" event={"ID":"9058179d-2c7b-4d0b-b162-10141ea754b0","Type":"ContainerDied","Data":"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 
11:36:43.604069 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff575667-wqvzf" event={"ID":"9058179d-2c7b-4d0b-b162-10141ea754b0","Type":"ContainerDied","Data":"965dbe4f972a93df810b74d2a00ab4be969aba708db62bfed4ffaeb10cb32304"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.604399 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4869a7a1-945a-47d9-9239-b1537f04be41-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.622887 5005 generic.go:334] "Generic (PLEG): container finished" podID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerID="c0bbfe176cabf6d4dbe27ce68585a6d343e23b905b032d79cbe4c2c37ca69e25" exitCode=137 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.622928 5005 generic.go:334] "Generic (PLEG): container finished" podID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerID="e5ff6c4894d10dce91d86d353419478f98f2b32e9775cb58f903f0f8348bd8af" exitCode=137 Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.623020 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d4b89f889-cxtxj" event={"ID":"a38c83b4-5b4f-4b02-b427-ff357cbbb47c","Type":"ContainerDied","Data":"c0bbfe176cabf6d4dbe27ce68585a6d343e23b905b032d79cbe4c2c37ca69e25"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.623065 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d4b89f889-cxtxj" event={"ID":"a38c83b4-5b4f-4b02-b427-ff357cbbb47c","Type":"ContainerDied","Data":"e5ff6c4894d10dce91d86d353419478f98f2b32e9775cb58f903f0f8348bd8af"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.623075 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d4b89f889-cxtxj" event={"ID":"a38c83b4-5b4f-4b02-b427-ff357cbbb47c","Type":"ContainerDied","Data":"f3f47dc787ba9ff9b7d0e4d226b7af3ed6613f8e1a143348c16c68fd6624620e"} Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 
11:36:43.623087 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f47dc787ba9ff9b7d0e4d226b7af3ed6613f8e1a143348c16c68fd6624620e" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.651560 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.704997 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ktbv\" (UniqueName: \"kubernetes.io/projected/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-kube-api-access-9ktbv\") pod \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705120 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-logs\") pod \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705150 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9058179d-2c7b-4d0b-b162-10141ea754b0-horizon-secret-key\") pod \"9058179d-2c7b-4d0b-b162-10141ea754b0\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705225 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-horizon-secret-key\") pod \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705315 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9058179d-2c7b-4d0b-b162-10141ea754b0-logs\") pod \"9058179d-2c7b-4d0b-b162-10141ea754b0\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705341 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-scripts\") pod \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705364 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmf79\" (UniqueName: \"kubernetes.io/projected/9058179d-2c7b-4d0b-b162-10141ea754b0-kube-api-access-jmf79\") pod \"9058179d-2c7b-4d0b-b162-10141ea754b0\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705428 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-scripts\") pod \"9058179d-2c7b-4d0b-b162-10141ea754b0\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705458 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-config-data\") pod \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\" (UID: \"a38c83b4-5b4f-4b02-b427-ff357cbbb47c\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.705486 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-config-data\") pod \"9058179d-2c7b-4d0b-b162-10141ea754b0\" (UID: \"9058179d-2c7b-4d0b-b162-10141ea754b0\") " Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.706787 5005 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9058179d-2c7b-4d0b-b162-10141ea754b0-logs" (OuterVolumeSpecName: "logs") pod "9058179d-2c7b-4d0b-b162-10141ea754b0" (UID: "9058179d-2c7b-4d0b-b162-10141ea754b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.708701 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a38c83b4-5b4f-4b02-b427-ff357cbbb47c" (UID: "a38c83b4-5b4f-4b02-b427-ff357cbbb47c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.709022 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-logs" (OuterVolumeSpecName: "logs") pod "a38c83b4-5b4f-4b02-b427-ff357cbbb47c" (UID: "a38c83b4-5b4f-4b02-b427-ff357cbbb47c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.709326 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-kube-api-access-9ktbv" (OuterVolumeSpecName: "kube-api-access-9ktbv") pod "a38c83b4-5b4f-4b02-b427-ff357cbbb47c" (UID: "a38c83b4-5b4f-4b02-b427-ff357cbbb47c"). InnerVolumeSpecName "kube-api-access-9ktbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.711591 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9058179d-2c7b-4d0b-b162-10141ea754b0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9058179d-2c7b-4d0b-b162-10141ea754b0" (UID: "9058179d-2c7b-4d0b-b162-10141ea754b0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.713557 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9058179d-2c7b-4d0b-b162-10141ea754b0-kube-api-access-jmf79" (OuterVolumeSpecName: "kube-api-access-jmf79") pod "9058179d-2c7b-4d0b-b162-10141ea754b0" (UID: "9058179d-2c7b-4d0b-b162-10141ea754b0"). InnerVolumeSpecName "kube-api-access-jmf79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.737457 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-scripts" (OuterVolumeSpecName: "scripts") pod "a38c83b4-5b4f-4b02-b427-ff357cbbb47c" (UID: "a38c83b4-5b4f-4b02-b427-ff357cbbb47c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.737574 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-config-data" (OuterVolumeSpecName: "config-data") pod "a38c83b4-5b4f-4b02-b427-ff357cbbb47c" (UID: "a38c83b4-5b4f-4b02-b427-ff357cbbb47c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.745207 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-config-data" (OuterVolumeSpecName: "config-data") pod "9058179d-2c7b-4d0b-b162-10141ea754b0" (UID: "9058179d-2c7b-4d0b-b162-10141ea754b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.763552 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-scripts" (OuterVolumeSpecName: "scripts") pod "9058179d-2c7b-4d0b-b162-10141ea754b0" (UID: "9058179d-2c7b-4d0b-b162-10141ea754b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.768052 5005 scope.go:117] "RemoveContainer" containerID="c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810708 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9058179d-2c7b-4d0b-b162-10141ea754b0-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810741 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810750 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmf79\" (UniqueName: \"kubernetes.io/projected/9058179d-2c7b-4d0b-b162-10141ea754b0-kube-api-access-jmf79\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810759 5005 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810768 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810776 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9058179d-2c7b-4d0b-b162-10141ea754b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810784 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ktbv\" (UniqueName: \"kubernetes.io/projected/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-kube-api-access-9ktbv\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810794 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810805 5005 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9058179d-2c7b-4d0b-b162-10141ea754b0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.810814 5005 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38c83b4-5b4f-4b02-b427-ff357cbbb47c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.842616 5005 scope.go:117] "RemoveContainer" containerID="1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7" Feb 25 11:36:43 crc kubenswrapper[5005]: E0225 11:36:43.843762 5005 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7\": container with ID starting with 1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7 not found: ID does not exist" containerID="1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.843804 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7"} err="failed to get container status \"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7\": rpc error: code = NotFound desc = could not find container \"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7\": container with ID starting with 1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7 not found: ID does not exist" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.843830 5005 scope.go:117] "RemoveContainer" containerID="c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410" Feb 25 11:36:43 crc kubenswrapper[5005]: E0225 11:36:43.844160 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410\": container with ID starting with c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410 not found: ID does not exist" containerID="c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.844208 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410"} err="failed to get container status \"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410\": rpc error: code = NotFound desc = could 
not find container \"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410\": container with ID starting with c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410 not found: ID does not exist" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.844233 5005 scope.go:117] "RemoveContainer" containerID="1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.846743 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7"} err="failed to get container status \"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7\": rpc error: code = NotFound desc = could not find container \"1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7\": container with ID starting with 1caaee8477a93e781a4248157c6c7308e9ec61e27aea473013e03c56bfb73ef7 not found: ID does not exist" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.846763 5005 scope.go:117] "RemoveContainer" containerID="c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.850310 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410"} err="failed to get container status \"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410\": rpc error: code = NotFound desc = could not find container \"c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410\": container with ID starting with c370cbe5acc689593d17ef4f49bb35b425291f847b51e4b1c7851cbb0b58c410 not found: ID does not exist" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.850339 5005 scope.go:117] "RemoveContainer" containerID="c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126" Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 
11:36:43.874157 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bc6f57c7-g7rsc"] Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.878392 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bc6f57c7-g7rsc"] Feb 25 11:36:43 crc kubenswrapper[5005]: I0225 11:36:43.883846 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f46657c4d-qmj7f" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.099718 5005 scope.go:117] "RemoveContainer" containerID="4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.119136 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.131638 5005 scope.go:117] "RemoveContainer" containerID="c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126" Feb 25 11:36:44 crc kubenswrapper[5005]: E0225 11:36:44.131945 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126\": container with ID starting with c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126 not found: ID does not exist" containerID="c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.131970 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126"} err="failed to get container status \"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126\": rpc error: code = NotFound desc = could not find container \"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126\": container with ID starting with 
c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126 not found: ID does not exist" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.132018 5005 scope.go:117] "RemoveContainer" containerID="4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65" Feb 25 11:36:44 crc kubenswrapper[5005]: E0225 11:36:44.132173 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65\": container with ID starting with 4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65 not found: ID does not exist" containerID="4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.132188 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65"} err="failed to get container status \"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65\": rpc error: code = NotFound desc = could not find container \"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65\": container with ID starting with 4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65 not found: ID does not exist" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.132200 5005 scope.go:117] "RemoveContainer" containerID="c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.132362 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126"} err="failed to get container status \"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126\": rpc error: code = NotFound desc = could not find container \"c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126\": container with ID 
starting with c5f4cbe17ef32f6a9fb426aab498217d62bf2bd2502d57cf58e11ff349c90126 not found: ID does not exist" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.132532 5005 scope.go:117] "RemoveContainer" containerID="4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.132725 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65"} err="failed to get container status \"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65\": rpc error: code = NotFound desc = could not find container \"4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65\": container with ID starting with 4097ac444a7b2c97b7909ed7ea55417354a1052de7b86d6c073a7b2af5b12b65 not found: ID does not exist" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.224648 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-sb\") pod \"20571c67-683c-4662-b39e-6dbd75aa8c51\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.224689 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-nb\") pod \"20571c67-683c-4662-b39e-6dbd75aa8c51\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.224707 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2mmh\" (UniqueName: \"kubernetes.io/projected/20571c67-683c-4662-b39e-6dbd75aa8c51-kube-api-access-x2mmh\") pod \"20571c67-683c-4662-b39e-6dbd75aa8c51\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " Feb 25 11:36:44 crc kubenswrapper[5005]: 
I0225 11:36:44.224742 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-dns-svc\") pod \"20571c67-683c-4662-b39e-6dbd75aa8c51\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.224773 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-config\") pod \"20571c67-683c-4662-b39e-6dbd75aa8c51\" (UID: \"20571c67-683c-4662-b39e-6dbd75aa8c51\") " Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.234404 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20571c67-683c-4662-b39e-6dbd75aa8c51-kube-api-access-x2mmh" (OuterVolumeSpecName: "kube-api-access-x2mmh") pod "20571c67-683c-4662-b39e-6dbd75aa8c51" (UID: "20571c67-683c-4662-b39e-6dbd75aa8c51"). InnerVolumeSpecName "kube-api-access-x2mmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.286593 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20571c67-683c-4662-b39e-6dbd75aa8c51" (UID: "20571c67-683c-4662-b39e-6dbd75aa8c51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.316791 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20571c67-683c-4662-b39e-6dbd75aa8c51" (UID: "20571c67-683c-4662-b39e-6dbd75aa8c51"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.327045 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.327078 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2mmh\" (UniqueName: \"kubernetes.io/projected/20571c67-683c-4662-b39e-6dbd75aa8c51-kube-api-access-x2mmh\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.327091 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.328475 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-config" (OuterVolumeSpecName: "config") pod "20571c67-683c-4662-b39e-6dbd75aa8c51" (UID: "20571c67-683c-4662-b39e-6dbd75aa8c51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.347680 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20571c67-683c-4662-b39e-6dbd75aa8c51" (UID: "20571c67-683c-4662-b39e-6dbd75aa8c51"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.388643 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f8bbcbf96-lg5q8" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.428550 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.428586 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20571c67-683c-4662-b39e-6dbd75aa8c51-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.632922 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.632925 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-rccrg" event={"ID":"20571c67-683c-4662-b39e-6dbd75aa8c51","Type":"ContainerDied","Data":"a08b37e749ef63e7c282c8030a83d73fb1283d24cd79b1695c09c55ebf83dd4e"} Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.633041 5005 scope.go:117] "RemoveContainer" containerID="d6d5ec54eb34191d83027271b852ea585a484b7cf2e27314616a6f49207384e1" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.637344 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff575667-wqvzf" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.646283 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerStarted","Data":"0a8e28b73392089baf878f2894a4ea60b0fb43195c8b47dfc91273f6ee18d6ef"} Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.646580 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.655074 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d4b89f889-cxtxj" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.657707 5005 scope.go:117] "RemoveContainer" containerID="755608b8af2ea3340a43119a5bfc00daffbe414aa391bce553535b4e280fcd48" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.679047 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4792830869999998 podStartE2EDuration="6.679023593s" podCreationTimestamp="2026-02-25 11:36:38 +0000 UTC" firstStartedPulling="2026-02-25 11:36:39.383850521 +0000 UTC m=+1113.424582848" lastFinishedPulling="2026-02-25 11:36:43.583591027 +0000 UTC m=+1117.624323354" observedRunningTime="2026-02-25 11:36:44.671676961 +0000 UTC m=+1118.712409288" watchObservedRunningTime="2026-02-25 11:36:44.679023593 +0000 UTC m=+1118.719755910" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.717224 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" path="/var/lib/kubelet/pods/4869a7a1-945a-47d9-9239-b1537f04be41/volumes" Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.717956 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-rccrg"] Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.717983 5005 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-rccrg"] Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.737450 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ff575667-wqvzf"] Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.754443 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7ff575667-wqvzf"] Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.764450 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d4b89f889-cxtxj"] Feb 25 11:36:44 crc kubenswrapper[5005]: I0225 11:36:44.769762 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d4b89f889-cxtxj"] Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.406036 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.450329 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-public-tls-certs\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.450492 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-httpd-config\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.450521 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-internal-tls-certs\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc 
kubenswrapper[5005]: I0225 11:36:45.450546 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-combined-ca-bundle\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.450585 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-config\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.450625 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhzd\" (UniqueName: \"kubernetes.io/projected/d92e4f70-284c-4330-94bd-5a052d96ac39-kube-api-access-wlhzd\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.450664 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-ovndb-tls-certs\") pod \"d92e4f70-284c-4330-94bd-5a052d96ac39\" (UID: \"d92e4f70-284c-4330-94bd-5a052d96ac39\") " Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.464261 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92e4f70-284c-4330-94bd-5a052d96ac39-kube-api-access-wlhzd" (OuterVolumeSpecName: "kube-api-access-wlhzd") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "kube-api-access-wlhzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.470905 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.532496 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-config" (OuterVolumeSpecName: "config") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.538253 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.540465 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.543565 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.560458 5005 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.560489 5005 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.560499 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.560511 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.560520 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhzd\" (UniqueName: \"kubernetes.io/projected/d92e4f70-284c-4330-94bd-5a052d96ac39-kube-api-access-wlhzd\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.560531 5005 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.589518 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d92e4f70-284c-4330-94bd-5a052d96ac39" (UID: "d92e4f70-284c-4330-94bd-5a052d96ac39"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.661818 5005 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d92e4f70-284c-4330-94bd-5a052d96ac39-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.668415 5005 generic.go:334] "Generic (PLEG): container finished" podID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerID="3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b" exitCode=0 Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.668471 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b769c78f-h4vrt" event={"ID":"d92e4f70-284c-4330-94bd-5a052d96ac39","Type":"ContainerDied","Data":"3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b"} Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.668534 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b769c78f-h4vrt" event={"ID":"d92e4f70-284c-4330-94bd-5a052d96ac39","Type":"ContainerDied","Data":"5a926ad74a4f081dd6ced4116be816057460a466924d8e9bd793b9a79e9d3d25"} Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.668494 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b769c78f-h4vrt" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.668554 5005 scope.go:117] "RemoveContainer" containerID="96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.695290 5005 scope.go:117] "RemoveContainer" containerID="3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.704576 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b769c78f-h4vrt"] Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.719007 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b769c78f-h4vrt"] Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.726492 5005 scope.go:117] "RemoveContainer" containerID="96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847" Feb 25 11:36:45 crc kubenswrapper[5005]: E0225 11:36:45.726956 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847\": container with ID starting with 96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847 not found: ID does not exist" containerID="96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847" Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.726990 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847"} err="failed to get container status \"96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847\": rpc error: code = NotFound desc = could not find container \"96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847\": container with ID starting with 96eee5493eca403c6128dacb2ebbbc2998823cfe6ed94966954489cbb016c847 not found: ID does not exist" Feb 25 11:36:45 crc 
kubenswrapper[5005]: I0225 11:36:45.727013 5005 scope.go:117] "RemoveContainer" containerID="3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b"
Feb 25 11:36:45 crc kubenswrapper[5005]: E0225 11:36:45.727210 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b\": container with ID starting with 3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b not found: ID does not exist" containerID="3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b"
Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.727260 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b"} err="failed to get container status \"3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b\": rpc error: code = NotFound desc = could not find container \"3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b\": container with ID starting with 3bf8952c510768b3c6c698898a465927d0d9eee5f51a20ebe731061666584d3b not found: ID does not exist"
Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.836533 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7568566766-7cjl5"
Feb 25 11:36:45 crc kubenswrapper[5005]: I0225 11:36:45.848704 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7568566766-7cjl5"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.084041 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.702937 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" path="/var/lib/kubelet/pods/20571c67-683c-4662-b39e-6dbd75aa8c51/volumes"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.703593 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" path="/var/lib/kubelet/pods/9058179d-2c7b-4d0b-b162-10141ea754b0/volumes"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.704183 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" path="/var/lib/kubelet/pods/a38c83b4-5b4f-4b02-b427-ff357cbbb47c/volumes"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.705185 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" path="/var/lib/kubelet/pods/d92e4f70-284c-4330-94bd-5a052d96ac39/volumes"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.866587 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f8bbcbf96-lg5q8"
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.955897 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f46657c4d-qmj7f"]
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.964030 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f46657c4d-qmj7f" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" containerID="cri-o://4d2a9eb2800c2682c577225ec553c32240fbc85fb06c20035304d3f03ddba264" gracePeriod=30
Feb 25 11:36:46 crc kubenswrapper[5005]: I0225 11:36:46.963354 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f46657c4d-qmj7f" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon-log" containerID="cri-o://69013907147be4d78676e13f1a212c8cf91c9975f9e239bda625c13ee8a696a3" gracePeriod=30
Feb 25 11:36:47 crc kubenswrapper[5005]: I0225 11:36:47.060916 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 25 11:36:47 crc kubenswrapper[5005]: I0225 11:36:47.120530 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 25 11:36:47 crc kubenswrapper[5005]: I0225 11:36:47.574726 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7445447f66-tpwqp"
Feb 25 11:36:47 crc kubenswrapper[5005]: I0225 11:36:47.689111 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="cinder-scheduler" containerID="cri-o://7acf02f2b76172ff659a62918955296b591175fa8dcc1c99c9fe19da0d02dec0" gracePeriod=30
Feb 25 11:36:47 crc kubenswrapper[5005]: I0225 11:36:47.689201 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="probe" containerID="cri-o://9526254a94b7abd9ed1e760bbc415671a99ce7ff8204c02ebb75234234e8183f" gracePeriod=30
Feb 25 11:36:47 crc kubenswrapper[5005]: I0225 11:36:47.711859 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7445447f66-tpwqp"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.092231 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c94f4bc5b-tpjtv"]
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093192 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-httpd"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093215 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-httpd"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093233 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-api"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093240 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-api"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093266 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerName="init"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093273 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerName="init"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093293 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093301 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093314 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093321 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093330 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093337 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.093349 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerName="dnsmasq-dns"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.093356 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerName="dnsmasq-dns"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.102497 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.102551 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.102580 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.102593 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: E0225 11:36:48.102606 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.102614 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.105848 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.105906 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.105948 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.105961 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="20571c67-683c-4662-b39e-6dbd75aa8c51" containerName="dnsmasq-dns"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.105995 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-api"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.106023 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9058179d-2c7b-4d0b-b162-10141ea754b0" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.106045 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38c83b4-5b4f-4b02-b427-ff357cbbb47c" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.106086 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4869a7a1-945a-47d9-9239-b1537f04be41" containerName="horizon-log"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.106115 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92e4f70-284c-4330-94bd-5a052d96ac39" containerName="neutron-httpd"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.108268 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.141962 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c94f4bc5b-tpjtv"]
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221474 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-config-data\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221557 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-internal-tls-certs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221598 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-scripts\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221652 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed08fee-c796-4162-b342-458bb0d5fc68-logs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221677 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94sgf\" (UniqueName: \"kubernetes.io/projected/fed08fee-c796-4162-b342-458bb0d5fc68-kube-api-access-94sgf\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221702 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-combined-ca-bundle\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.221741 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-public-tls-certs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323323 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-public-tls-certs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323449 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-config-data\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323482 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-internal-tls-certs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323506 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-scripts\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323546 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed08fee-c796-4162-b342-458bb0d5fc68-logs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323562 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94sgf\" (UniqueName: \"kubernetes.io/projected/fed08fee-c796-4162-b342-458bb0d5fc68-kube-api-access-94sgf\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.323580 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-combined-ca-bundle\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.325574 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fed08fee-c796-4162-b342-458bb0d5fc68-logs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.330024 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-internal-tls-certs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.332302 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-scripts\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.342981 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-public-tls-certs\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.352306 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-combined-ca-bundle\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.353034 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94sgf\" (UniqueName: \"kubernetes.io/projected/fed08fee-c796-4162-b342-458bb0d5fc68-kube-api-access-94sgf\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.366348 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fed08fee-c796-4162-b342-458bb0d5fc68-config-data\") pod \"placement-6c94f4bc5b-tpjtv\" (UID: \"fed08fee-c796-4162-b342-458bb0d5fc68\") " pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.461788 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:48 crc kubenswrapper[5005]: I0225 11:36:48.995290 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c94f4bc5b-tpjtv"]
Feb 25 11:36:49 crc kubenswrapper[5005]: W0225 11:36:49.004249 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfed08fee_c796_4162_b342_458bb0d5fc68.slice/crio-869ddd55481eb7c3035b086cc10829dbb6576fbc37244b7509f1fc62e5f56906 WatchSource:0}: Error finding container 869ddd55481eb7c3035b086cc10829dbb6576fbc37244b7509f1fc62e5f56906: Status 404 returned error can't find the container with id 869ddd55481eb7c3035b086cc10829dbb6576fbc37244b7509f1fc62e5f56906
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.755618 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c94f4bc5b-tpjtv" event={"ID":"fed08fee-c796-4162-b342-458bb0d5fc68","Type":"ContainerStarted","Data":"11fbe31139a9aaa09d2517104ba216d34ed63f6b3e1fb3a536d7f54b57598966"}
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.755893 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c94f4bc5b-tpjtv" event={"ID":"fed08fee-c796-4162-b342-458bb0d5fc68","Type":"ContainerStarted","Data":"4a78c5c4b587aeb791a72a6b63c332a28a0749092df6d33c3f0a9083db9dd544"}
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.755904 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c94f4bc5b-tpjtv" event={"ID":"fed08fee-c796-4162-b342-458bb0d5fc68","Type":"ContainerStarted","Data":"869ddd55481eb7c3035b086cc10829dbb6576fbc37244b7509f1fc62e5f56906"}
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.756495 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.756530 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.759828 5005 generic.go:334] "Generic (PLEG): container finished" podID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerID="9526254a94b7abd9ed1e760bbc415671a99ce7ff8204c02ebb75234234e8183f" exitCode=0
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.759849 5005 generic.go:334] "Generic (PLEG): container finished" podID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerID="7acf02f2b76172ff659a62918955296b591175fa8dcc1c99c9fe19da0d02dec0" exitCode=0
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.759866 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"589109a6-a7f8-484f-a5e2-a83bd90c21c4","Type":"ContainerDied","Data":"9526254a94b7abd9ed1e760bbc415671a99ce7ff8204c02ebb75234234e8183f"}
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.759883 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"589109a6-a7f8-484f-a5e2-a83bd90c21c4","Type":"ContainerDied","Data":"7acf02f2b76172ff659a62918955296b591175fa8dcc1c99c9fe19da0d02dec0"}
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.784519 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c94f4bc5b-tpjtv" podStartSLOduration=1.784501422 podStartE2EDuration="1.784501422s" podCreationTimestamp="2026-02-25 11:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:49.779938883 +0000 UTC m=+1123.820671210" watchObservedRunningTime="2026-02-25 11:36:49.784501422 +0000 UTC m=+1123.825233749"
Feb 25 11:36:49 crc kubenswrapper[5005]: I0225 11:36:49.922297 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.052128 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-scripts\") pod \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") "
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.052481 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data\") pod \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") "
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.052531 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data-custom\") pod \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") "
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.052548 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-combined-ca-bundle\") pod \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") "
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.052594 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/589109a6-a7f8-484f-a5e2-a83bd90c21c4-etc-machine-id\") pod \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") "
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.052619 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph5fz\" (UniqueName: \"kubernetes.io/projected/589109a6-a7f8-484f-a5e2-a83bd90c21c4-kube-api-access-ph5fz\") pod \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\" (UID: \"589109a6-a7f8-484f-a5e2-a83bd90c21c4\") "
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.059955 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-scripts" (OuterVolumeSpecName: "scripts") pod "589109a6-a7f8-484f-a5e2-a83bd90c21c4" (UID: "589109a6-a7f8-484f-a5e2-a83bd90c21c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.060122 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/589109a6-a7f8-484f-a5e2-a83bd90c21c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "589109a6-a7f8-484f-a5e2-a83bd90c21c4" (UID: "589109a6-a7f8-484f-a5e2-a83bd90c21c4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.064763 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589109a6-a7f8-484f-a5e2-a83bd90c21c4-kube-api-access-ph5fz" (OuterVolumeSpecName: "kube-api-access-ph5fz") pod "589109a6-a7f8-484f-a5e2-a83bd90c21c4" (UID: "589109a6-a7f8-484f-a5e2-a83bd90c21c4"). InnerVolumeSpecName "kube-api-access-ph5fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.097433 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "589109a6-a7f8-484f-a5e2-a83bd90c21c4" (UID: "589109a6-a7f8-484f-a5e2-a83bd90c21c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.117049 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "589109a6-a7f8-484f-a5e2-a83bd90c21c4" (UID: "589109a6-a7f8-484f-a5e2-a83bd90c21c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.154105 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.154135 5005 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.154145 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.154153 5005 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/589109a6-a7f8-484f-a5e2-a83bd90c21c4-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.154161 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph5fz\" (UniqueName: \"kubernetes.io/projected/589109a6-a7f8-484f-a5e2-a83bd90c21c4-kube-api-access-ph5fz\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.188195 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data" (OuterVolumeSpecName: "config-data") pod "589109a6-a7f8-484f-a5e2-a83bd90c21c4" (UID: "589109a6-a7f8-484f-a5e2-a83bd90c21c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.255736 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589109a6-a7f8-484f-a5e2-a83bd90c21c4-config-data\") on node \"crc\" DevicePath \"\""
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.333852 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64d599957d-86x2s"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.399720 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74447dd785-mk8tc"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.628339 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64d599957d-86x2s"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.699542 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7568566766-7cjl5"]
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.699739 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7568566766-7cjl5" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api-log" containerID="cri-o://140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a" gracePeriod=30
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.700018 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7568566766-7cjl5" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api" containerID="cri-o://a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c" gracePeriod=30
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.789106 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"589109a6-a7f8-484f-a5e2-a83bd90c21c4","Type":"ContainerDied","Data":"c45447c083f0559152311b85ea7216fc8ae430c9108f05b65f7d19d958bbe617"}
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.789418 5005 scope.go:117] "RemoveContainer" containerID="9526254a94b7abd9ed1e760bbc415671a99ce7ff8204c02ebb75234234e8183f"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.789824 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.793538 5005 generic.go:334] "Generic (PLEG): container finished" podID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerID="4d2a9eb2800c2682c577225ec553c32240fbc85fb06c20035304d3f03ddba264" exitCode=0
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.794061 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f46657c4d-qmj7f" event={"ID":"f37d127e-e5d1-45f3-8e44-262ab354e0c2","Type":"ContainerDied","Data":"4d2a9eb2800c2682c577225ec553c32240fbc85fb06c20035304d3f03ddba264"}
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.830432 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.832829 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.834844 5005 scope.go:117] "RemoveContainer" containerID="7acf02f2b76172ff659a62918955296b591175fa8dcc1c99c9fe19da0d02dec0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.861566 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 25 11:36:50 crc kubenswrapper[5005]: E0225 11:36:50.861963 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="cinder-scheduler"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.861980 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="cinder-scheduler"
Feb 25 11:36:50 crc kubenswrapper[5005]: E0225 11:36:50.862004 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="probe"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.862013 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="probe"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.862176 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="cinder-scheduler"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.862192 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" containerName="probe"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.863073 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.865700 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.870792 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.973098 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-scripts\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.973154 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-config-data\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.973183 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29bc251d-1657-4787-81f8-83fdf903f229-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.973251 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclzx\" (UniqueName: \"kubernetes.io/projected/29bc251d-1657-4787-81f8-83fdf903f229-kube-api-access-pclzx\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.973311 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:50 crc kubenswrapper[5005]: I0225 11:36:50.973419 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.076169 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.076242 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.076330 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-scripts\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.076348 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-config-data\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.076379 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29bc251d-1657-4787-81f8-83fdf903f229-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.076418 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pclzx\" (UniqueName: \"kubernetes.io/projected/29bc251d-1657-4787-81f8-83fdf903f229-kube-api-access-pclzx\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.077395 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29bc251d-1657-4787-81f8-83fdf903f229-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0"
Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.081177 5005 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.081469 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-config-data\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.081490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.090328 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bc251d-1657-4787-81f8-83fdf903f229-scripts\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.096944 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pclzx\" (UniqueName: \"kubernetes.io/projected/29bc251d-1657-4787-81f8-83fdf903f229-kube-api-access-pclzx\") pod \"cinder-scheduler-0\" (UID: \"29bc251d-1657-4787-81f8-83fdf903f229\") " pod="openstack/cinder-scheduler-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.214917 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.230347 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.542067 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f46657c4d-qmj7f" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.803989 5005 generic.go:334] "Generic (PLEG): container finished" podID="8187472b-9222-4f32-a945-eaf167a7600d" containerID="140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a" exitCode=143 Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.804028 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7568566766-7cjl5" event={"ID":"8187472b-9222-4f32-a945-eaf167a7600d","Type":"ContainerDied","Data":"140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a"} Feb 25 11:36:51 crc kubenswrapper[5005]: I0225 11:36:51.901331 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 25 11:36:51 crc kubenswrapper[5005]: W0225 11:36:51.903174 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29bc251d_1657_4787_81f8_83fdf903f229.slice/crio-e377c39f22fc5d01cfb242bddd815a922ef5fc47e8cd7e14746ff16ed276bdbb WatchSource:0}: Error finding container e377c39f22fc5d01cfb242bddd815a922ef5fc47e8cd7e14746ff16ed276bdbb: Status 404 returned error can't find the container with id e377c39f22fc5d01cfb242bddd815a922ef5fc47e8cd7e14746ff16ed276bdbb Feb 25 11:36:52 crc kubenswrapper[5005]: I0225 11:36:52.707975 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589109a6-a7f8-484f-a5e2-a83bd90c21c4" path="/var/lib/kubelet/pods/589109a6-a7f8-484f-a5e2-a83bd90c21c4/volumes" Feb 25 11:36:52 crc 
kubenswrapper[5005]: I0225 11:36:52.824818 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29bc251d-1657-4787-81f8-83fdf903f229","Type":"ContainerStarted","Data":"2c3a60635af4fc73a066c3339c1d9c760839c90404b351ada511fb6788efe091"} Feb 25 11:36:52 crc kubenswrapper[5005]: I0225 11:36:52.824865 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29bc251d-1657-4787-81f8-83fdf903f229","Type":"ContainerStarted","Data":"e377c39f22fc5d01cfb242bddd815a922ef5fc47e8cd7e14746ff16ed276bdbb"} Feb 25 11:36:53 crc kubenswrapper[5005]: I0225 11:36:53.834205 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29bc251d-1657-4787-81f8-83fdf903f229","Type":"ContainerStarted","Data":"41e5e15edb83bb3f8197d44d25232d619a538b720a24acf3837fc1eaee3097ba"} Feb 25 11:36:53 crc kubenswrapper[5005]: I0225 11:36:53.857944 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.857927502 podStartE2EDuration="3.857927502s" podCreationTimestamp="2026-02-25 11:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:36:53.849707603 +0000 UTC m=+1127.890439940" watchObservedRunningTime="2026-02-25 11:36:53.857927502 +0000 UTC m=+1127.898659829" Feb 25 11:36:53 crc kubenswrapper[5005]: I0225 11:36:53.880878 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7568566766-7cjl5" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:51782->10.217.0.160:9311: read: connection reset by peer" Feb 25 11:36:53 crc kubenswrapper[5005]: I0225 11:36:53.880942 5005 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-7568566766-7cjl5" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:51780->10.217.0.160:9311: read: connection reset by peer" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.054438 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.055867 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.058104 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bfqsw" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.058348 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.058658 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.066048 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.146125 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.146190 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6k7\" (UniqueName: \"kubernetes.io/projected/a54289db-956b-413f-9aee-fae71e1630a9-kube-api-access-7q6k7\") pod \"openstackclient\" (UID: 
\"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.146822 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.146883 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.248314 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.248405 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.248471 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc 
kubenswrapper[5005]: I0225 11:36:54.248526 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6k7\" (UniqueName: \"kubernetes.io/projected/a54289db-956b-413f-9aee-fae71e1630a9-kube-api-access-7q6k7\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.250146 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.254779 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.255702 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.264792 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6k7\" (UniqueName: \"kubernetes.io/projected/a54289db-956b-413f-9aee-fae71e1630a9-kube-api-access-7q6k7\") pod \"openstackclient\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.299485 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:54 crc 
kubenswrapper[5005]: I0225 11:36:54.300156 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.314023 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.338429 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.339643 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.381427 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.425797 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:54 crc kubenswrapper[5005]: E0225 11:36:54.455903 5005 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 25 11:36:54 crc kubenswrapper[5005]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a54289db-956b-413f-9aee-fae71e1630a9_0(056aaa571d3710ca3538b26c712c0b193a39a94284641acf587f7f2c0275e104): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"056aaa571d3710ca3538b26c712c0b193a39a94284641acf587f7f2c0275e104" Netns:"/var/run/netns/ef3a4bc2-448c-4dd3-b103-6ac8a0ecd4d6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=056aaa571d3710ca3538b26c712c0b193a39a94284641acf587f7f2c0275e104;K8S_POD_UID=a54289db-956b-413f-9aee-fae71e1630a9" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: 
[openstack/openstackclient/a54289db-956b-413f-9aee-fae71e1630a9]: expected pod UID "a54289db-956b-413f-9aee-fae71e1630a9" but got "7513bdda-48d0-4c57-a264-56886c4a89bd" from Kube API Feb 25 11:36:54 crc kubenswrapper[5005]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 25 11:36:54 crc kubenswrapper[5005]: > Feb 25 11:36:54 crc kubenswrapper[5005]: E0225 11:36:54.455976 5005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 25 11:36:54 crc kubenswrapper[5005]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a54289db-956b-413f-9aee-fae71e1630a9_0(056aaa571d3710ca3538b26c712c0b193a39a94284641acf587f7f2c0275e104): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"056aaa571d3710ca3538b26c712c0b193a39a94284641acf587f7f2c0275e104" Netns:"/var/run/netns/ef3a4bc2-448c-4dd3-b103-6ac8a0ecd4d6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=056aaa571d3710ca3538b26c712c0b193a39a94284641acf587f7f2c0275e104;K8S_POD_UID=a54289db-956b-413f-9aee-fae71e1630a9" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a54289db-956b-413f-9aee-fae71e1630a9]: expected pod UID "a54289db-956b-413f-9aee-fae71e1630a9" but got "7513bdda-48d0-4c57-a264-56886c4a89bd" from Kube API Feb 25 11:36:54 crc kubenswrapper[5005]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 25 11:36:54 crc kubenswrapper[5005]: > pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.457994 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7513bdda-48d0-4c57-a264-56886c4a89bd-openstack-config\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.458049 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjzg\" (UniqueName: \"kubernetes.io/projected/7513bdda-48d0-4c57-a264-56886c4a89bd-kube-api-access-tfjzg\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.458097 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7513bdda-48d0-4c57-a264-56886c4a89bd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.458152 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7513bdda-48d0-4c57-a264-56886c4a89bd-openstack-config-secret\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " 
pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.559656 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5sb9\" (UniqueName: \"kubernetes.io/projected/8187472b-9222-4f32-a945-eaf167a7600d-kube-api-access-z5sb9\") pod \"8187472b-9222-4f32-a945-eaf167a7600d\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560019 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-combined-ca-bundle\") pod \"8187472b-9222-4f32-a945-eaf167a7600d\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560104 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187472b-9222-4f32-a945-eaf167a7600d-logs\") pod \"8187472b-9222-4f32-a945-eaf167a7600d\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560142 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data-custom\") pod \"8187472b-9222-4f32-a945-eaf167a7600d\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560222 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data\") pod \"8187472b-9222-4f32-a945-eaf167a7600d\" (UID: \"8187472b-9222-4f32-a945-eaf167a7600d\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560484 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7513bdda-48d0-4c57-a264-56886c4a89bd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560556 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7513bdda-48d0-4c57-a264-56886c4a89bd-openstack-config-secret\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560609 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7513bdda-48d0-4c57-a264-56886c4a89bd-openstack-config\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560617 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8187472b-9222-4f32-a945-eaf167a7600d-logs" (OuterVolumeSpecName: "logs") pod "8187472b-9222-4f32-a945-eaf167a7600d" (UID: "8187472b-9222-4f32-a945-eaf167a7600d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560651 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfjzg\" (UniqueName: \"kubernetes.io/projected/7513bdda-48d0-4c57-a264-56886c4a89bd-kube-api-access-tfjzg\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.560716 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187472b-9222-4f32-a945-eaf167a7600d-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.561351 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7513bdda-48d0-4c57-a264-56886c4a89bd-openstack-config\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.563410 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8187472b-9222-4f32-a945-eaf167a7600d-kube-api-access-z5sb9" (OuterVolumeSpecName: "kube-api-access-z5sb9") pod "8187472b-9222-4f32-a945-eaf167a7600d" (UID: "8187472b-9222-4f32-a945-eaf167a7600d"). InnerVolumeSpecName "kube-api-access-z5sb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.565486 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7513bdda-48d0-4c57-a264-56886c4a89bd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.569478 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8187472b-9222-4f32-a945-eaf167a7600d" (UID: "8187472b-9222-4f32-a945-eaf167a7600d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.576406 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfjzg\" (UniqueName: \"kubernetes.io/projected/7513bdda-48d0-4c57-a264-56886c4a89bd-kube-api-access-tfjzg\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.577979 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7513bdda-48d0-4c57-a264-56886c4a89bd-openstack-config-secret\") pod \"openstackclient\" (UID: \"7513bdda-48d0-4c57-a264-56886c4a89bd\") " pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.593554 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8187472b-9222-4f32-a945-eaf167a7600d" (UID: "8187472b-9222-4f32-a945-eaf167a7600d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.613482 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data" (OuterVolumeSpecName: "config-data") pod "8187472b-9222-4f32-a945-eaf167a7600d" (UID: "8187472b-9222-4f32-a945-eaf167a7600d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.662128 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.662346 5005 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.662422 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187472b-9222-4f32-a945-eaf167a7600d-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.662506 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5sb9\" (UniqueName: \"kubernetes.io/projected/8187472b-9222-4f32-a945-eaf167a7600d-kube-api-access-z5sb9\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.735547 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.847400 5005 generic.go:334] "Generic (PLEG): container finished" podID="8187472b-9222-4f32-a945-eaf167a7600d" containerID="a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c" exitCode=0 Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.847458 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.847533 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7568566766-7cjl5" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.847630 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7568566766-7cjl5" event={"ID":"8187472b-9222-4f32-a945-eaf167a7600d","Type":"ContainerDied","Data":"a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c"} Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.847652 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7568566766-7cjl5" event={"ID":"8187472b-9222-4f32-a945-eaf167a7600d","Type":"ContainerDied","Data":"5801abc1771605454fca162d8f1c18f97d7f6c751d23c9b881acc487a033d954"} Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.847668 5005 scope.go:117] "RemoveContainer" containerID="a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.872474 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.887707 5005 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a54289db-956b-413f-9aee-fae71e1630a9" podUID="7513bdda-48d0-4c57-a264-56886c4a89bd" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.896699 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7568566766-7cjl5"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.904297 5005 scope.go:117] "RemoveContainer" containerID="140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.909253 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7568566766-7cjl5"] Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.957841 5005 scope.go:117] "RemoveContainer" containerID="a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c" Feb 25 11:36:54 crc kubenswrapper[5005]: E0225 11:36:54.960045 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c\": container with ID starting with a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c not found: ID does not exist" containerID="a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.960113 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c"} err="failed to get container status \"a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c\": rpc error: code = NotFound desc = could not find container \"a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c\": container with ID starting with 
a345d21407ce5312910b20be2105f8bb11749b714233a03abf10ed5ec9c1d23c not found: ID does not exist" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.960143 5005 scope.go:117] "RemoveContainer" containerID="140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a" Feb 25 11:36:54 crc kubenswrapper[5005]: E0225 11:36:54.964505 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a\": container with ID starting with 140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a not found: ID does not exist" containerID="140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.964552 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a"} err="failed to get container status \"140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a\": rpc error: code = NotFound desc = could not find container \"140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a\": container with ID starting with 140611c9089ca0f94f3e19075737265ce48dd031b01903506309e2de63f0af5a not found: ID does not exist" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.966968 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-combined-ca-bundle\") pod \"a54289db-956b-413f-9aee-fae71e1630a9\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.967174 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config\") pod \"a54289db-956b-413f-9aee-fae71e1630a9\" 
(UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.967343 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config-secret\") pod \"a54289db-956b-413f-9aee-fae71e1630a9\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.967521 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q6k7\" (UniqueName: \"kubernetes.io/projected/a54289db-956b-413f-9aee-fae71e1630a9-kube-api-access-7q6k7\") pod \"a54289db-956b-413f-9aee-fae71e1630a9\" (UID: \"a54289db-956b-413f-9aee-fae71e1630a9\") " Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.967839 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a54289db-956b-413f-9aee-fae71e1630a9" (UID: "a54289db-956b-413f-9aee-fae71e1630a9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.968485 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.972831 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54289db-956b-413f-9aee-fae71e1630a9-kube-api-access-7q6k7" (OuterVolumeSpecName: "kube-api-access-7q6k7") pod "a54289db-956b-413f-9aee-fae71e1630a9" (UID: "a54289db-956b-413f-9aee-fae71e1630a9"). InnerVolumeSpecName "kube-api-access-7q6k7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.985498 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a54289db-956b-413f-9aee-fae71e1630a9" (UID: "a54289db-956b-413f-9aee-fae71e1630a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:54 crc kubenswrapper[5005]: I0225 11:36:54.991729 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a54289db-956b-413f-9aee-fae71e1630a9" (UID: "a54289db-956b-413f-9aee-fae71e1630a9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.070118 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.070149 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a54289db-956b-413f-9aee-fae71e1630a9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.070160 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q6k7\" (UniqueName: \"kubernetes.io/projected/a54289db-956b-413f-9aee-fae71e1630a9-kube-api-access-7q6k7\") on node \"crc\" DevicePath \"\"" Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.229419 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 25 11:36:55 crc kubenswrapper[5005]: 
W0225 11:36:55.230645 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7513bdda_48d0_4c57_a264_56886c4a89bd.slice/crio-0107c0984715428607d27e496407470fc0ee0c0b904f4c0caef75d9afe9532e5 WatchSource:0}: Error finding container 0107c0984715428607d27e496407470fc0ee0c0b904f4c0caef75d9afe9532e5: Status 404 returned error can't find the container with id 0107c0984715428607d27e496407470fc0ee0c0b904f4c0caef75d9afe9532e5 Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.866918 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7513bdda-48d0-4c57-a264-56886c4a89bd","Type":"ContainerStarted","Data":"0107c0984715428607d27e496407470fc0ee0c0b904f4c0caef75d9afe9532e5"} Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.866933 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 25 11:36:55 crc kubenswrapper[5005]: I0225 11:36:55.887427 5005 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a54289db-956b-413f-9aee-fae71e1630a9" podUID="7513bdda-48d0-4c57-a264-56886c4a89bd" Feb 25 11:36:56 crc kubenswrapper[5005]: I0225 11:36:56.231836 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 25 11:36:56 crc kubenswrapper[5005]: I0225 11:36:56.697569 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8187472b-9222-4f32-a945-eaf167a7600d" path="/var/lib/kubelet/pods/8187472b-9222-4f32-a945-eaf167a7600d/volumes" Feb 25 11:36:56 crc kubenswrapper[5005]: I0225 11:36:56.698384 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54289db-956b-413f-9aee-fae71e1630a9" path="/var/lib/kubelet/pods/a54289db-956b-413f-9aee-fae71e1630a9/volumes" Feb 25 11:36:58 crc kubenswrapper[5005]: I0225 11:36:58.087318 5005 
patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:36:58 crc kubenswrapper[5005]: I0225 11:36:58.087423 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:37:01 crc kubenswrapper[5005]: I0225 11:37:01.447861 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 25 11:37:01 crc kubenswrapper[5005]: I0225 11:37:01.544481 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f46657c4d-qmj7f" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 25 11:37:04 crc kubenswrapper[5005]: I0225 11:37:04.445219 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:04 crc kubenswrapper[5005]: I0225 11:37:04.446128 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-central-agent" containerID="cri-o://b60d0c9060998ffc054412737a4e6e862d044b5bba0d11788fd64d351fa21e51" gracePeriod=30 Feb 25 11:37:04 crc kubenswrapper[5005]: I0225 11:37:04.446277 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="proxy-httpd" 
containerID="cri-o://0a8e28b73392089baf878f2894a4ea60b0fb43195c8b47dfc91273f6ee18d6ef" gracePeriod=30 Feb 25 11:37:04 crc kubenswrapper[5005]: I0225 11:37:04.446338 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="sg-core" containerID="cri-o://934789b36a79ce497a935186463f6c524add0e6c8c8062d0d85a1f74fda9697d" gracePeriod=30 Feb 25 11:37:04 crc kubenswrapper[5005]: I0225 11:37:04.446449 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-notification-agent" containerID="cri-o://eefed9e1b907d4da7ea0caf8cc5f3b30234611827cf8c3652fce559c4548094c" gracePeriod=30 Feb 25 11:37:04 crc kubenswrapper[5005]: I0225 11:37:04.453662 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": EOF" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.012332 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7513bdda-48d0-4c57-a264-56886c4a89bd","Type":"ContainerStarted","Data":"327b78b414a5814bfc11d98f85a3c57b0273d9d8e075b96286669fb707b8bd7b"} Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.017467 5005 generic.go:334] "Generic (PLEG): container finished" podID="097d7f93-e779-4856-a6d1-57be2bcba899" containerID="0a8e28b73392089baf878f2894a4ea60b0fb43195c8b47dfc91273f6ee18d6ef" exitCode=0 Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.017502 5005 generic.go:334] "Generic (PLEG): container finished" podID="097d7f93-e779-4856-a6d1-57be2bcba899" containerID="934789b36a79ce497a935186463f6c524add0e6c8c8062d0d85a1f74fda9697d" exitCode=2 Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.017515 5005 generic.go:334] 
"Generic (PLEG): container finished" podID="097d7f93-e779-4856-a6d1-57be2bcba899" containerID="b60d0c9060998ffc054412737a4e6e862d044b5bba0d11788fd64d351fa21e51" exitCode=0 Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.017536 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerDied","Data":"0a8e28b73392089baf878f2894a4ea60b0fb43195c8b47dfc91273f6ee18d6ef"} Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.017561 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerDied","Data":"934789b36a79ce497a935186463f6c524add0e6c8c8062d0d85a1f74fda9697d"} Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.017574 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerDied","Data":"b60d0c9060998ffc054412737a4e6e862d044b5bba0d11788fd64d351fa21e51"} Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.034320 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.109560855 podStartE2EDuration="11.034301856s" podCreationTimestamp="2026-02-25 11:36:54 +0000 UTC" firstStartedPulling="2026-02-25 11:36:55.232805352 +0000 UTC m=+1129.273537679" lastFinishedPulling="2026-02-25 11:37:04.157546363 +0000 UTC m=+1138.198278680" observedRunningTime="2026-02-25 11:37:05.029199012 +0000 UTC m=+1139.069931349" watchObservedRunningTime="2026-02-25 11:37:05.034301856 +0000 UTC m=+1139.075034203" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.274928 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qdf8s"] Feb 25 11:37:05 crc kubenswrapper[5005]: E0225 11:37:05.275274 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api-log" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.275289 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api-log" Feb 25 11:37:05 crc kubenswrapper[5005]: E0225 11:37:05.275308 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.275315 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.275511 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api-log" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.275527 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8187472b-9222-4f32-a945-eaf167a7600d" containerName="barbican-api" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.276102 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.296412 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qdf8s"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.405459 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w992v"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.406676 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.445443 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3048-account-create-update-h86sj"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.446864 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.453687 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.464420 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w992v"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.467691 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5c495c-be81-4a63-b604-a9c3f5d2de7c-operator-scripts\") pod \"nova-api-db-create-qdf8s\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.467750 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z6fs\" (UniqueName: \"kubernetes.io/projected/da5c495c-be81-4a63-b604-a9c3f5d2de7c-kube-api-access-7z6fs\") pod \"nova-api-db-create-qdf8s\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.481606 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3048-account-create-update-h86sj"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.564453 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-n8jp5"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.566204 5005 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571707 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n8jp5"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571729 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5c495c-be81-4a63-b604-a9c3f5d2de7c-operator-scripts\") pod \"nova-api-db-create-qdf8s\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571796 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bafe0b13-d056-4990-b93d-f4cb487c7cd2-operator-scripts\") pod \"nova-api-3048-account-create-update-h86sj\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571824 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z6fs\" (UniqueName: \"kubernetes.io/projected/da5c495c-be81-4a63-b604-a9c3f5d2de7c-kube-api-access-7z6fs\") pod \"nova-api-db-create-qdf8s\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571858 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlsmm\" (UniqueName: \"kubernetes.io/projected/019aa605-4910-4b2d-aba0-de303611c1f4-kube-api-access-hlsmm\") pod \"nova-cell0-db-create-w992v\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571887 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plc7\" (UniqueName: \"kubernetes.io/projected/bafe0b13-d056-4990-b93d-f4cb487c7cd2-kube-api-access-5plc7\") pod \"nova-api-3048-account-create-update-h86sj\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.571927 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/019aa605-4910-4b2d-aba0-de303611c1f4-operator-scripts\") pod \"nova-cell0-db-create-w992v\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.572481 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5c495c-be81-4a63-b604-a9c3f5d2de7c-operator-scripts\") pod \"nova-api-db-create-qdf8s\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.582890 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7a9e-account-create-update-v5tcv"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.584321 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.589337 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.595297 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z6fs\" (UniqueName: \"kubernetes.io/projected/da5c495c-be81-4a63-b604-a9c3f5d2de7c-kube-api-access-7z6fs\") pod \"nova-api-db-create-qdf8s\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.595658 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.602201 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7a9e-account-create-update-v5tcv"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673326 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlsmm\" (UniqueName: \"kubernetes.io/projected/019aa605-4910-4b2d-aba0-de303611c1f4-kube-api-access-hlsmm\") pod \"nova-cell0-db-create-w992v\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673438 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5plc7\" (UniqueName: \"kubernetes.io/projected/bafe0b13-d056-4990-b93d-f4cb487c7cd2-kube-api-access-5plc7\") pod \"nova-api-3048-account-create-update-h86sj\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673486 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-55mnk\" (UniqueName: \"kubernetes.io/projected/2c0d38a7-c7e8-4dbd-80a4-403075937b43-kube-api-access-55mnk\") pod \"nova-cell1-db-create-n8jp5\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673511 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/019aa605-4910-4b2d-aba0-de303611c1f4-operator-scripts\") pod \"nova-cell0-db-create-w992v\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673571 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5666d-9f95-46f2-9455-ee3eaecf137d-operator-scripts\") pod \"nova-cell0-7a9e-account-create-update-v5tcv\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673596 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7vw\" (UniqueName: \"kubernetes.io/projected/7da5666d-9f95-46f2-9455-ee3eaecf137d-kube-api-access-zk7vw\") pod \"nova-cell0-7a9e-account-create-update-v5tcv\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.673623 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0d38a7-c7e8-4dbd-80a4-403075937b43-operator-scripts\") pod \"nova-cell1-db-create-n8jp5\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 
11:37:05.673649 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bafe0b13-d056-4990-b93d-f4cb487c7cd2-operator-scripts\") pod \"nova-api-3048-account-create-update-h86sj\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.674292 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bafe0b13-d056-4990-b93d-f4cb487c7cd2-operator-scripts\") pod \"nova-api-3048-account-create-update-h86sj\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.674489 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/019aa605-4910-4b2d-aba0-de303611c1f4-operator-scripts\") pod \"nova-cell0-db-create-w992v\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.691887 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlsmm\" (UniqueName: \"kubernetes.io/projected/019aa605-4910-4b2d-aba0-de303611c1f4-kube-api-access-hlsmm\") pod \"nova-cell0-db-create-w992v\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.692171 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5plc7\" (UniqueName: \"kubernetes.io/projected/bafe0b13-d056-4990-b93d-f4cb487c7cd2-kube-api-access-5plc7\") pod \"nova-api-3048-account-create-update-h86sj\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc 
kubenswrapper[5005]: I0225 11:37:05.749677 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.776458 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5666d-9f95-46f2-9455-ee3eaecf137d-operator-scripts\") pod \"nova-cell0-7a9e-account-create-update-v5tcv\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.776520 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7vw\" (UniqueName: \"kubernetes.io/projected/7da5666d-9f95-46f2-9455-ee3eaecf137d-kube-api-access-zk7vw\") pod \"nova-cell0-7a9e-account-create-update-v5tcv\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.776576 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0d38a7-c7e8-4dbd-80a4-403075937b43-operator-scripts\") pod \"nova-cell1-db-create-n8jp5\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.776707 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55mnk\" (UniqueName: \"kubernetes.io/projected/2c0d38a7-c7e8-4dbd-80a4-403075937b43-kube-api-access-55mnk\") pod \"nova-cell1-db-create-n8jp5\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.778289 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7da5666d-9f95-46f2-9455-ee3eaecf137d-operator-scripts\") pod \"nova-cell0-7a9e-account-create-update-v5tcv\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.779172 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0d38a7-c7e8-4dbd-80a4-403075937b43-operator-scripts\") pod \"nova-cell1-db-create-n8jp5\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.794235 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3c62-account-create-update-5949j"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.795479 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.795774 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.799157 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.804253 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7vw\" (UniqueName: \"kubernetes.io/projected/7da5666d-9f95-46f2-9455-ee3eaecf137d-kube-api-access-zk7vw\") pod \"nova-cell0-7a9e-account-create-update-v5tcv\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.810212 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55mnk\" (UniqueName: \"kubernetes.io/projected/2c0d38a7-c7e8-4dbd-80a4-403075937b43-kube-api-access-55mnk\") pod \"nova-cell1-db-create-n8jp5\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.815474 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3c62-account-create-update-5949j"] Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.878849 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt28g\" (UniqueName: \"kubernetes.io/projected/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-kube-api-access-kt28g\") pod \"nova-cell1-3c62-account-create-update-5949j\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.878913 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-operator-scripts\") pod 
\"nova-cell1-3c62-account-create-update-5949j\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.882353 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.980503 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt28g\" (UniqueName: \"kubernetes.io/projected/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-kube-api-access-kt28g\") pod \"nova-cell1-3c62-account-create-update-5949j\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.980588 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-operator-scripts\") pod \"nova-cell1-3c62-account-create-update-5949j\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:05 crc kubenswrapper[5005]: I0225 11:37:05.981323 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-operator-scripts\") pod \"nova-cell1-3c62-account-create-update-5949j\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.000470 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt28g\" (UniqueName: \"kubernetes.io/projected/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-kube-api-access-kt28g\") pod \"nova-cell1-3c62-account-create-update-5949j\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " 
pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.055049 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.082421 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qdf8s"] Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.116268 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.311285 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w992v"] Feb 25 11:37:06 crc kubenswrapper[5005]: W0225 11:37:06.321052 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod019aa605_4910_4b2d_aba0_de303611c1f4.slice/crio-3442fe0c0fd78e235ef1a201a1de487c377fd39a1c295742716d2102177dacbc WatchSource:0}: Error finding container 3442fe0c0fd78e235ef1a201a1de487c377fd39a1c295742716d2102177dacbc: Status 404 returned error can't find the container with id 3442fe0c0fd78e235ef1a201a1de487c377fd39a1c295742716d2102177dacbc Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.345018 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3048-account-create-update-h86sj"] Feb 25 11:37:06 crc kubenswrapper[5005]: W0225 11:37:06.355870 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbafe0b13_d056_4990_b93d_f4cb487c7cd2.slice/crio-a9c0c5307ec25aea807062911f30dddc5cc87ae5dc5cbb40010312c67a79383e WatchSource:0}: Error finding container a9c0c5307ec25aea807062911f30dddc5cc87ae5dc5cbb40010312c67a79383e: Status 404 returned error can't find the container with id 
a9c0c5307ec25aea807062911f30dddc5cc87ae5dc5cbb40010312c67a79383e Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.451652 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n8jp5"] Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.484758 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3c62-account-create-update-5949j"] Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.590250 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7a9e-account-create-update-v5tcv"] Feb 25 11:37:06 crc kubenswrapper[5005]: I0225 11:37:06.614667 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 25 11:37:06 crc kubenswrapper[5005]: E0225 11:37:06.941970 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda5c495c_be81_4a63_b604_a9c3f5d2de7c.slice/crio-conmon-5f462d533525280e83df1e793a4c5a455a8e4d66e81bef7ece22ac36f9d25951.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda5c495c_be81_4a63_b604_a9c3f5d2de7c.slice/crio-5f462d533525280e83df1e793a4c5a455a8e4d66e81bef7ece22ac36f9d25951.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbafe0b13_d056_4990_b93d_f4cb487c7cd2.slice/crio-conmon-8267108057d7041af0dff418f9ec4f85bf5d371cfe92a23210fa40f55471ecc5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c0d38a7_c7e8_4dbd_80a4_403075937b43.slice/crio-5a73deb7775153963dcb14bf1d55d8ab13fb9490574f4825b410412ba017db70.scope\": RecentStats: unable to find data in memory cache]" Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.046636 5005 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" event={"ID":"7da5666d-9f95-46f2-9455-ee3eaecf137d","Type":"ContainerStarted","Data":"38fd8d510baa510ac34891a0c2bf83c739295f5de5ada34a3e25b041e67c7532"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.046688 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" event={"ID":"7da5666d-9f95-46f2-9455-ee3eaecf137d","Type":"ContainerStarted","Data":"0555572c47f23a8591931fdd2372caa58523dd02dcf74f141602b4bf44c693b7"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.049198 5005 generic.go:334] "Generic (PLEG): container finished" podID="019aa605-4910-4b2d-aba0-de303611c1f4" containerID="bf2586cb62f01cfd5f893e76eaf953c726779ca6b3b72e19e33751979d6ef325" exitCode=0 Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.049267 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w992v" event={"ID":"019aa605-4910-4b2d-aba0-de303611c1f4","Type":"ContainerDied","Data":"bf2586cb62f01cfd5f893e76eaf953c726779ca6b3b72e19e33751979d6ef325"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.049290 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w992v" event={"ID":"019aa605-4910-4b2d-aba0-de303611c1f4","Type":"ContainerStarted","Data":"3442fe0c0fd78e235ef1a201a1de487c377fd39a1c295742716d2102177dacbc"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.050998 5005 generic.go:334] "Generic (PLEG): container finished" podID="2c0d38a7-c7e8-4dbd-80a4-403075937b43" containerID="5a73deb7775153963dcb14bf1d55d8ab13fb9490574f4825b410412ba017db70" exitCode=0 Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.051059 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n8jp5" 
event={"ID":"2c0d38a7-c7e8-4dbd-80a4-403075937b43","Type":"ContainerDied","Data":"5a73deb7775153963dcb14bf1d55d8ab13fb9490574f4825b410412ba017db70"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.051102 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n8jp5" event={"ID":"2c0d38a7-c7e8-4dbd-80a4-403075937b43","Type":"ContainerStarted","Data":"6100e5243f429d205c28c5b6ac6f35967f03c0b41f0f53ff0f7ac35cbddc65c3"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.052363 5005 generic.go:334] "Generic (PLEG): container finished" podID="da5c495c-be81-4a63-b604-a9c3f5d2de7c" containerID="5f462d533525280e83df1e793a4c5a455a8e4d66e81bef7ece22ac36f9d25951" exitCode=0 Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.052413 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qdf8s" event={"ID":"da5c495c-be81-4a63-b604-a9c3f5d2de7c","Type":"ContainerDied","Data":"5f462d533525280e83df1e793a4c5a455a8e4d66e81bef7ece22ac36f9d25951"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.052445 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qdf8s" event={"ID":"da5c495c-be81-4a63-b604-a9c3f5d2de7c","Type":"ContainerStarted","Data":"f7b5058914cd7dec3fbac191881b9a5b1649fae4bf9df9af0b3d41247237f565"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.053495 5005 generic.go:334] "Generic (PLEG): container finished" podID="daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" containerID="40dddeef3284b447a221f8bad3dd3c30f0d9ff7f43aff7d15107af580728b618" exitCode=0 Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.053551 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3c62-account-create-update-5949j" event={"ID":"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda","Type":"ContainerDied","Data":"40dddeef3284b447a221f8bad3dd3c30f0d9ff7f43aff7d15107af580728b618"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.053588 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3c62-account-create-update-5949j" event={"ID":"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda","Type":"ContainerStarted","Data":"c029e95bda9b6fa2a38fde097b3f129c04f6f12484c6a5179dfe39c72e635754"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.055883 5005 generic.go:334] "Generic (PLEG): container finished" podID="bafe0b13-d056-4990-b93d-f4cb487c7cd2" containerID="8267108057d7041af0dff418f9ec4f85bf5d371cfe92a23210fa40f55471ecc5" exitCode=0 Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.055926 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3048-account-create-update-h86sj" event={"ID":"bafe0b13-d056-4990-b93d-f4cb487c7cd2","Type":"ContainerDied","Data":"8267108057d7041af0dff418f9ec4f85bf5d371cfe92a23210fa40f55471ecc5"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.055953 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3048-account-create-update-h86sj" event={"ID":"bafe0b13-d056-4990-b93d-f4cb487c7cd2","Type":"ContainerStarted","Data":"a9c0c5307ec25aea807062911f30dddc5cc87ae5dc5cbb40010312c67a79383e"} Feb 25 11:37:07 crc kubenswrapper[5005]: I0225 11:37:07.061862 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" podStartSLOduration=2.061839878 podStartE2EDuration="2.061839878s" podCreationTimestamp="2026-02-25 11:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:07.05960695 +0000 UTC m=+1141.100339277" watchObservedRunningTime="2026-02-25 11:37:07.061839878 +0000 UTC m=+1141.102572205" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.065661 5005 generic.go:334] "Generic (PLEG): container finished" podID="7da5666d-9f95-46f2-9455-ee3eaecf137d" containerID="38fd8d510baa510ac34891a0c2bf83c739295f5de5ada34a3e25b041e67c7532" 
exitCode=0 Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.065698 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" event={"ID":"7da5666d-9f95-46f2-9455-ee3eaecf137d","Type":"ContainerDied","Data":"38fd8d510baa510ac34891a0c2bf83c739295f5de5ada34a3e25b041e67c7532"} Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.500330 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.624085 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/019aa605-4910-4b2d-aba0-de303611c1f4-operator-scripts\") pod \"019aa605-4910-4b2d-aba0-de303611c1f4\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.624537 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlsmm\" (UniqueName: \"kubernetes.io/projected/019aa605-4910-4b2d-aba0-de303611c1f4-kube-api-access-hlsmm\") pod \"019aa605-4910-4b2d-aba0-de303611c1f4\" (UID: \"019aa605-4910-4b2d-aba0-de303611c1f4\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.626278 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/019aa605-4910-4b2d-aba0-de303611c1f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "019aa605-4910-4b2d-aba0-de303611c1f4" (UID: "019aa605-4910-4b2d-aba0-de303611c1f4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.667583 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019aa605-4910-4b2d-aba0-de303611c1f4-kube-api-access-hlsmm" (OuterVolumeSpecName: "kube-api-access-hlsmm") pod "019aa605-4910-4b2d-aba0-de303611c1f4" (UID: "019aa605-4910-4b2d-aba0-de303611c1f4"). InnerVolumeSpecName "kube-api-access-hlsmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.729639 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/019aa605-4910-4b2d-aba0-de303611c1f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.729674 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlsmm\" (UniqueName: \"kubernetes.io/projected/019aa605-4910-4b2d-aba0-de303611c1f4-kube-api-access-hlsmm\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.746082 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.776080 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.790179 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.791876 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.817570 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": dial tcp 10.217.0.162:3000: connect: connection refused" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.831559 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5plc7\" (UniqueName: \"kubernetes.io/projected/bafe0b13-d056-4990-b93d-f4cb487c7cd2-kube-api-access-5plc7\") pod \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.831770 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z6fs\" (UniqueName: \"kubernetes.io/projected/da5c495c-be81-4a63-b604-a9c3f5d2de7c-kube-api-access-7z6fs\") pod \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.831849 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bafe0b13-d056-4990-b93d-f4cb487c7cd2-operator-scripts\") pod \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\" (UID: \"bafe0b13-d056-4990-b93d-f4cb487c7cd2\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.831901 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5c495c-be81-4a63-b604-a9c3f5d2de7c-operator-scripts\") pod \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\" (UID: \"da5c495c-be81-4a63-b604-a9c3f5d2de7c\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.832349 5005 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bafe0b13-d056-4990-b93d-f4cb487c7cd2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bafe0b13-d056-4990-b93d-f4cb487c7cd2" (UID: "bafe0b13-d056-4990-b93d-f4cb487c7cd2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.832957 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5c495c-be81-4a63-b604-a9c3f5d2de7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da5c495c-be81-4a63-b604-a9c3f5d2de7c" (UID: "da5c495c-be81-4a63-b604-a9c3f5d2de7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.836488 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5c495c-be81-4a63-b604-a9c3f5d2de7c-kube-api-access-7z6fs" (OuterVolumeSpecName: "kube-api-access-7z6fs") pod "da5c495c-be81-4a63-b604-a9c3f5d2de7c" (UID: "da5c495c-be81-4a63-b604-a9c3f5d2de7c"). InnerVolumeSpecName "kube-api-access-7z6fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.840585 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafe0b13-d056-4990-b93d-f4cb487c7cd2-kube-api-access-5plc7" (OuterVolumeSpecName: "kube-api-access-5plc7") pod "bafe0b13-d056-4990-b93d-f4cb487c7cd2" (UID: "bafe0b13-d056-4990-b93d-f4cb487c7cd2"). InnerVolumeSpecName "kube-api-access-5plc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.933782 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55mnk\" (UniqueName: \"kubernetes.io/projected/2c0d38a7-c7e8-4dbd-80a4-403075937b43-kube-api-access-55mnk\") pod \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.933852 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0d38a7-c7e8-4dbd-80a4-403075937b43-operator-scripts\") pod \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\" (UID: \"2c0d38a7-c7e8-4dbd-80a4-403075937b43\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934008 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt28g\" (UniqueName: \"kubernetes.io/projected/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-kube-api-access-kt28g\") pod \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934024 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-operator-scripts\") pod \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\" (UID: \"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda\") " Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934361 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z6fs\" (UniqueName: \"kubernetes.io/projected/da5c495c-be81-4a63-b604-a9c3f5d2de7c-kube-api-access-7z6fs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934390 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bafe0b13-d056-4990-b93d-f4cb487c7cd2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934399 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5c495c-be81-4a63-b604-a9c3f5d2de7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934411 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5plc7\" (UniqueName: \"kubernetes.io/projected/bafe0b13-d056-4990-b93d-f4cb487c7cd2-kube-api-access-5plc7\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.934987 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0d38a7-c7e8-4dbd-80a4-403075937b43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c0d38a7-c7e8-4dbd-80a4-403075937b43" (UID: "2c0d38a7-c7e8-4dbd-80a4-403075937b43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.935131 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" (UID: "daf2d0ce-59f4-40b5-a2fd-fe4991a62dda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.938562 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0d38a7-c7e8-4dbd-80a4-403075937b43-kube-api-access-55mnk" (OuterVolumeSpecName: "kube-api-access-55mnk") pod "2c0d38a7-c7e8-4dbd-80a4-403075937b43" (UID: "2c0d38a7-c7e8-4dbd-80a4-403075937b43"). InnerVolumeSpecName "kube-api-access-55mnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:08 crc kubenswrapper[5005]: I0225 11:37:08.938696 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-kube-api-access-kt28g" (OuterVolumeSpecName: "kube-api-access-kt28g") pod "daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" (UID: "daf2d0ce-59f4-40b5-a2fd-fe4991a62dda"). InnerVolumeSpecName "kube-api-access-kt28g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.035837 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0d38a7-c7e8-4dbd-80a4-403075937b43-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.035883 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt28g\" (UniqueName: \"kubernetes.io/projected/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-kube-api-access-kt28g\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.035894 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.035903 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55mnk\" (UniqueName: \"kubernetes.io/projected/2c0d38a7-c7e8-4dbd-80a4-403075937b43-kube-api-access-55mnk\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.093984 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3048-account-create-update-h86sj" event={"ID":"bafe0b13-d056-4990-b93d-f4cb487c7cd2","Type":"ContainerDied","Data":"a9c0c5307ec25aea807062911f30dddc5cc87ae5dc5cbb40010312c67a79383e"} Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 
11:37:09.094022 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c0c5307ec25aea807062911f30dddc5cc87ae5dc5cbb40010312c67a79383e" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.094103 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3048-account-create-update-h86sj" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.100268 5005 generic.go:334] "Generic (PLEG): container finished" podID="097d7f93-e779-4856-a6d1-57be2bcba899" containerID="eefed9e1b907d4da7ea0caf8cc5f3b30234611827cf8c3652fce559c4548094c" exitCode=0 Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.100438 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerDied","Data":"eefed9e1b907d4da7ea0caf8cc5f3b30234611827cf8c3652fce559c4548094c"} Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.104077 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w992v" event={"ID":"019aa605-4910-4b2d-aba0-de303611c1f4","Type":"ContainerDied","Data":"3442fe0c0fd78e235ef1a201a1de487c377fd39a1c295742716d2102177dacbc"} Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.104122 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3442fe0c0fd78e235ef1a201a1de487c377fd39a1c295742716d2102177dacbc" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.104223 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-w992v" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.106026 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n8jp5" event={"ID":"2c0d38a7-c7e8-4dbd-80a4-403075937b43","Type":"ContainerDied","Data":"6100e5243f429d205c28c5b6ac6f35967f03c0b41f0f53ff0f7ac35cbddc65c3"} Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.106050 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6100e5243f429d205c28c5b6ac6f35967f03c0b41f0f53ff0f7ac35cbddc65c3" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.106110 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8jp5" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.112093 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qdf8s" event={"ID":"da5c495c-be81-4a63-b604-a9c3f5d2de7c","Type":"ContainerDied","Data":"f7b5058914cd7dec3fbac191881b9a5b1649fae4bf9df9af0b3d41247237f565"} Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.112126 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7b5058914cd7dec3fbac191881b9a5b1649fae4bf9df9af0b3d41247237f565" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.112165 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qdf8s" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.115389 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3c62-account-create-update-5949j" event={"ID":"daf2d0ce-59f4-40b5-a2fd-fe4991a62dda","Type":"ContainerDied","Data":"c029e95bda9b6fa2a38fde097b3f129c04f6f12484c6a5179dfe39c72e635754"} Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.115429 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c029e95bda9b6fa2a38fde097b3f129c04f6f12484c6a5179dfe39c72e635754" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.116587 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3c62-account-create-update-5949j" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.193657 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.339124 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.341946 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-config-data\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342006 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-run-httpd\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342086 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-sg-core-conf-yaml\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342183 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h46fp\" (UniqueName: \"kubernetes.io/projected/097d7f93-e779-4856-a6d1-57be2bcba899-kube-api-access-h46fp\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342233 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-log-httpd\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342313 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-scripts\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342361 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-combined-ca-bundle\") pod \"097d7f93-e779-4856-a6d1-57be2bcba899\" (UID: \"097d7f93-e779-4856-a6d1-57be2bcba899\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342436 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342842 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.342933 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.345630 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097d7f93-e779-4856-a6d1-57be2bcba899-kube-api-access-h46fp" (OuterVolumeSpecName: "kube-api-access-h46fp") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "kube-api-access-h46fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.346181 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-scripts" (OuterVolumeSpecName: "scripts") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.379030 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.440615 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.444826 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk7vw\" (UniqueName: \"kubernetes.io/projected/7da5666d-9f95-46f2-9455-ee3eaecf137d-kube-api-access-zk7vw\") pod \"7da5666d-9f95-46f2-9455-ee3eaecf137d\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.445157 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5666d-9f95-46f2-9455-ee3eaecf137d-operator-scripts\") pod \"7da5666d-9f95-46f2-9455-ee3eaecf137d\" (UID: \"7da5666d-9f95-46f2-9455-ee3eaecf137d\") " Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.448539 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h46fp\" (UniqueName: \"kubernetes.io/projected/097d7f93-e779-4856-a6d1-57be2bcba899-kube-api-access-h46fp\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.448574 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/097d7f93-e779-4856-a6d1-57be2bcba899-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.448585 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.448596 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.448605 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.449214 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da5666d-9f95-46f2-9455-ee3eaecf137d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7da5666d-9f95-46f2-9455-ee3eaecf137d" (UID: "7da5666d-9f95-46f2-9455-ee3eaecf137d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.453155 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da5666d-9f95-46f2-9455-ee3eaecf137d-kube-api-access-zk7vw" (OuterVolumeSpecName: "kube-api-access-zk7vw") pod "7da5666d-9f95-46f2-9455-ee3eaecf137d" (UID: "7da5666d-9f95-46f2-9455-ee3eaecf137d"). InnerVolumeSpecName "kube-api-access-zk7vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.455629 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-config-data" (OuterVolumeSpecName: "config-data") pod "097d7f93-e779-4856-a6d1-57be2bcba899" (UID: "097d7f93-e779-4856-a6d1-57be2bcba899"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.550730 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097d7f93-e779-4856-a6d1-57be2bcba899-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.550773 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7da5666d-9f95-46f2-9455-ee3eaecf137d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:09 crc kubenswrapper[5005]: I0225 11:37:09.550789 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk7vw\" (UniqueName: \"kubernetes.io/projected/7da5666d-9f95-46f2-9455-ee3eaecf137d-kube-api-access-zk7vw\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.129106 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" event={"ID":"7da5666d-9f95-46f2-9455-ee3eaecf137d","Type":"ContainerDied","Data":"0555572c47f23a8591931fdd2372caa58523dd02dcf74f141602b4bf44c693b7"} Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.129634 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0555572c47f23a8591931fdd2372caa58523dd02dcf74f141602b4bf44c693b7" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.129153 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7a9e-account-create-update-v5tcv" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.132155 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"097d7f93-e779-4856-a6d1-57be2bcba899","Type":"ContainerDied","Data":"379ea284f05c30caaa955948d781b47438af9a09729d15fc5f003f54cb5857ad"} Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.132207 5005 scope.go:117] "RemoveContainer" containerID="0a8e28b73392089baf878f2894a4ea60b0fb43195c8b47dfc91273f6ee18d6ef" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.132293 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.176771 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.183418 5005 scope.go:117] "RemoveContainer" containerID="934789b36a79ce497a935186463f6c524add0e6c8c8062d0d85a1f74fda9697d" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.194870 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.205861 5005 scope.go:117] "RemoveContainer" containerID="eefed9e1b907d4da7ea0caf8cc5f3b30234611827cf8c3652fce559c4548094c" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.206651 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207020 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-central-agent" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207040 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-central-agent" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 
11:37:10.207051 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5c495c-be81-4a63-b604-a9c3f5d2de7c" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207057 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5c495c-be81-4a63-b604-a9c3f5d2de7c" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207066 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019aa605-4910-4b2d-aba0-de303611c1f4" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207072 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="019aa605-4910-4b2d-aba0-de303611c1f4" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207085 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="sg-core" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207091 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="sg-core" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207102 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207107 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207116 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0d38a7-c7e8-4dbd-80a4-403075937b43" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207122 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d38a7-c7e8-4dbd-80a4-403075937b43" containerName="mariadb-database-create" 
Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207136 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da5666d-9f95-46f2-9455-ee3eaecf137d" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207142 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da5666d-9f95-46f2-9455-ee3eaecf137d" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207159 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="proxy-httpd" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207165 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="proxy-httpd" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207177 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafe0b13-d056-4990-b93d-f4cb487c7cd2" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207183 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafe0b13-d056-4990-b93d-f4cb487c7cd2" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: E0225 11:37:10.207192 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-notification-agent" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207197 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-notification-agent" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207355 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5c495c-be81-4a63-b604-a9c3f5d2de7c" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207364 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7da5666d-9f95-46f2-9455-ee3eaecf137d" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207395 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-central-agent" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207408 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="proxy-httpd" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207416 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafe0b13-d056-4990-b93d-f4cb487c7cd2" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207425 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="ceilometer-notification-agent" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207431 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="019aa605-4910-4b2d-aba0-de303611c1f4" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207442 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" containerName="mariadb-account-create-update" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207451 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0d38a7-c7e8-4dbd-80a4-403075937b43" containerName="mariadb-database-create" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.207461 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" containerName="sg-core" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.208902 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.211739 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.211860 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.217934 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.267012 5005 scope.go:117] "RemoveContainer" containerID="b60d0c9060998ffc054412737a4e6e862d044b5bba0d11788fd64d351fa21e51" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.361896 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-log-httpd\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.362127 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwfp\" (UniqueName: \"kubernetes.io/projected/ebcf2e95-a654-4ea8-9008-804d52fe7f80-kube-api-access-5jwfp\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.362234 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-scripts\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.362429 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.362625 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-run-httpd\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.362655 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.362689 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-config-data\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466487 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-log-httpd\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466558 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwfp\" (UniqueName: \"kubernetes.io/projected/ebcf2e95-a654-4ea8-9008-804d52fe7f80-kube-api-access-5jwfp\") pod \"ceilometer-0\" (UID: 
\"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466596 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-scripts\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466648 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466721 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-run-httpd\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466742 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466768 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-config-data\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.466955 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-log-httpd\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.467446 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-run-httpd\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.473311 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.473363 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-scripts\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.473758 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-config-data\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.473913 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.484154 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jwfp\" (UniqueName: \"kubernetes.io/projected/ebcf2e95-a654-4ea8-9008-804d52fe7f80-kube-api-access-5jwfp\") pod \"ceilometer-0\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") " pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.564909 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.574876 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.705907 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097d7f93-e779-4856-a6d1-57be2bcba899" path="/var/lib/kubelet/pods/097d7f93-e779-4856-a6d1-57be2bcba899/volumes" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.779385 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84f5fb5d89-rmhhp" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.866424 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-twxb2"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.867825 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.872702 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.872902 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.873030 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cm6lk" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.883153 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f5b655546-bjxxg"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.883544 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f5b655546-bjxxg" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-api" containerID="cri-o://7c495e8fbfb1d6621c6e39cd8b4230628f57149e411e7d8c1965a39dd11cca94" gracePeriod=30 Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.883816 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f5b655546-bjxxg" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-httpd" containerID="cri-o://605440471bce3b821f21c3944458ca4a7b7938db6957d3b54b6bfe3a49a6298e" gracePeriod=30 Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.888698 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.902364 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-twxb2"] Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.979974 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-scripts\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.980029 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.980088 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-config-data\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:10 crc kubenswrapper[5005]: I0225 11:37:10.980136 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdmg6\" (UniqueName: \"kubernetes.io/projected/811df314-ec45-441f-9869-cb6b976163cb-kube-api-access-tdmg6\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.082314 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-scripts\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.082413 5005 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.082493 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-config-data\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.082551 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdmg6\" (UniqueName: \"kubernetes.io/projected/811df314-ec45-441f-9869-cb6b976163cb-kube-api-access-tdmg6\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.087071 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.088837 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-config-data\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.089118 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-scripts\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.097791 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdmg6\" (UniqueName: \"kubernetes.io/projected/811df314-ec45-441f-9869-cb6b976163cb-kube-api-access-tdmg6\") pod \"nova-cell0-conductor-db-sync-twxb2\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.142539 5005 generic.go:334] "Generic (PLEG): container finished" podID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerID="605440471bce3b821f21c3944458ca4a7b7938db6957d3b54b6bfe3a49a6298e" exitCode=0 Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.142608 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b655546-bjxxg" event={"ID":"b568af4f-9018-43a0-abb5-5e656bb4e039","Type":"ContainerDied","Data":"605440471bce3b821f21c3944458ca4a7b7938db6957d3b54b6bfe3a49a6298e"} Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.145158 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerStarted","Data":"48f20834567b2f27c21bc587ba8cfe0ccd5352f9fb97fde85ccc5375056016ee"} Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.239247 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.555531 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f46657c4d-qmj7f" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.556310 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f46657c4d-qmj7f" Feb 25 11:37:11 crc kubenswrapper[5005]: I0225 11:37:11.638397 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-twxb2"] Feb 25 11:37:12 crc kubenswrapper[5005]: I0225 11:37:12.152936 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-twxb2" event={"ID":"811df314-ec45-441f-9869-cb6b976163cb","Type":"ContainerStarted","Data":"4f80002795c426e28a628d63f3d7f79436147013c1a5b37b6d7f4a8ba9461ab7"} Feb 25 11:37:12 crc kubenswrapper[5005]: I0225 11:37:12.154426 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerStarted","Data":"941a626e284f2209504b5ccd5a37adfc46db4675ff44e51267a4e83a3945561d"} Feb 25 11:37:13 crc kubenswrapper[5005]: I0225 11:37:13.165541 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerStarted","Data":"1f63cfdbe351c06bd08e7935b521201df6bd960d2aad420ea764d13e9c42d04e"} Feb 25 11:37:13 crc kubenswrapper[5005]: I0225 11:37:13.597759 5005 scope.go:117] "RemoveContainer" containerID="1be24398d3f466f46170c106963bc8cab76c212b3fac2c6aa1d291ede2930cde" Feb 25 11:37:14 crc kubenswrapper[5005]: I0225 11:37:14.176332 5005 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerStarted","Data":"9e38886e64a6e9ebc07d1894d781ae259cc39425c0b722216b2f3cdca015ce53"}
Feb 25 11:37:15 crc kubenswrapper[5005]: I0225 11:37:15.189685 5005 generic.go:334] "Generic (PLEG): container finished" podID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerID="7c495e8fbfb1d6621c6e39cd8b4230628f57149e411e7d8c1965a39dd11cca94" exitCode=0
Feb 25 11:37:15 crc kubenswrapper[5005]: I0225 11:37:15.189760 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b655546-bjxxg" event={"ID":"b568af4f-9018-43a0-abb5-5e656bb4e039","Type":"ContainerDied","Data":"7c495e8fbfb1d6621c6e39cd8b4230628f57149e411e7d8c1965a39dd11cca94"}
Feb 25 11:37:18 crc kubenswrapper[5005]: I0225 11:37:18.224225 5005 generic.go:334] "Generic (PLEG): container finished" podID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerID="69013907147be4d78676e13f1a212c8cf91c9975f9e239bda625c13ee8a696a3" exitCode=137
Feb 25 11:37:18 crc kubenswrapper[5005]: I0225 11:37:18.224302 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f46657c4d-qmj7f" event={"ID":"f37d127e-e5d1-45f3-8e44-262ab354e0c2","Type":"ContainerDied","Data":"69013907147be4d78676e13f1a212c8cf91c9975f9e239bda625c13ee8a696a3"}
Feb 25 11:37:19 crc kubenswrapper[5005]: I0225 11:37:19.508678 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:37:19 crc kubenswrapper[5005]: I0225 11:37:19.510239 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c94f4bc5b-tpjtv"
Feb 25 11:37:19 crc kubenswrapper[5005]: I0225 11:37:19.577327 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7445447f66-tpwqp"]
Feb 25 11:37:19 crc kubenswrapper[5005]: I0225 11:37:19.577607 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7445447f66-tpwqp" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-log" containerID="cri-o://c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118" gracePeriod=30
Feb 25 11:37:19 crc kubenswrapper[5005]: I0225 11:37:19.577669 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7445447f66-tpwqp" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-api" containerID="cri-o://96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb" gracePeriod=30
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.245110 5005 generic.go:334] "Generic (PLEG): container finished" podID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerID="c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118" exitCode=143
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.246146 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7445447f66-tpwqp" event={"ID":"60d4805b-4c99-4d5c-9e31-744e20ba5c55","Type":"ContainerDied","Data":"c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118"}
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.477074 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5b655546-bjxxg"
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.524213 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-combined-ca-bundle\") pod \"b568af4f-9018-43a0-abb5-5e656bb4e039\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.524266 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh6wh\" (UniqueName: \"kubernetes.io/projected/b568af4f-9018-43a0-abb5-5e656bb4e039-kube-api-access-hh6wh\") pod \"b568af4f-9018-43a0-abb5-5e656bb4e039\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.524312 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-httpd-config\") pod \"b568af4f-9018-43a0-abb5-5e656bb4e039\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.524348 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-config\") pod \"b568af4f-9018-43a0-abb5-5e656bb4e039\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.524364 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-ovndb-tls-certs\") pod \"b568af4f-9018-43a0-abb5-5e656bb4e039\" (UID: \"b568af4f-9018-43a0-abb5-5e656bb4e039\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.529537 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b568af4f-9018-43a0-abb5-5e656bb4e039-kube-api-access-hh6wh" (OuterVolumeSpecName: "kube-api-access-hh6wh") pod "b568af4f-9018-43a0-abb5-5e656bb4e039" (UID: "b568af4f-9018-43a0-abb5-5e656bb4e039"). InnerVolumeSpecName "kube-api-access-hh6wh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.529811 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b568af4f-9018-43a0-abb5-5e656bb4e039" (UID: "b568af4f-9018-43a0-abb5-5e656bb4e039"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.552223 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.594487 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-config" (OuterVolumeSpecName: "config") pod "b568af4f-9018-43a0-abb5-5e656bb4e039" (UID: "b568af4f-9018-43a0-abb5-5e656bb4e039"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.600718 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b568af4f-9018-43a0-abb5-5e656bb4e039" (UID: "b568af4f-9018-43a0-abb5-5e656bb4e039"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626080 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-combined-ca-bundle\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626121 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-secret-key\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626218 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f37d127e-e5d1-45f3-8e44-262ab354e0c2-logs\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626263 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-tls-certs\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626306 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-scripts\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626342 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-config-data\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626385 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8kgw\" (UniqueName: \"kubernetes.io/projected/f37d127e-e5d1-45f3-8e44-262ab354e0c2-kube-api-access-k8kgw\") pod \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\" (UID: \"f37d127e-e5d1-45f3-8e44-262ab354e0c2\") "
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626735 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-config\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626739 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37d127e-e5d1-45f3-8e44-262ab354e0c2-logs" (OuterVolumeSpecName: "logs") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626746 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626789 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh6wh\" (UniqueName: \"kubernetes.io/projected/b568af4f-9018-43a0-abb5-5e656bb4e039-kube-api-access-hh6wh\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.626806 5005 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.632393 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37d127e-e5d1-45f3-8e44-262ab354e0c2-kube-api-access-k8kgw" (OuterVolumeSpecName: "kube-api-access-k8kgw") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "kube-api-access-k8kgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.632679 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.648850 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b568af4f-9018-43a0-abb5-5e656bb4e039" (UID: "b568af4f-9018-43a0-abb5-5e656bb4e039"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.658234 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-scripts" (OuterVolumeSpecName: "scripts") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.661577 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.667300 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-config-data" (OuterVolumeSpecName: "config-data") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.676693 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f37d127e-e5d1-45f3-8e44-262ab354e0c2" (UID: "f37d127e-e5d1-45f3-8e44-262ab354e0c2"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728155 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-config-data\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728193 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8kgw\" (UniqueName: \"kubernetes.io/projected/f37d127e-e5d1-45f3-8e44-262ab354e0c2-kube-api-access-k8kgw\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728209 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728220 5005 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728232 5005 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b568af4f-9018-43a0-abb5-5e656bb4e039-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728247 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f37d127e-e5d1-45f3-8e44-262ab354e0c2-logs\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728258 5005 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37d127e-e5d1-45f3-8e44-262ab354e0c2-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:20 crc kubenswrapper[5005]: I0225 11:37:20.728268 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f37d127e-e5d1-45f3-8e44-262ab354e0c2-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.254661 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f46657c4d-qmj7f" event={"ID":"f37d127e-e5d1-45f3-8e44-262ab354e0c2","Type":"ContainerDied","Data":"5a5e3519bce33a27a443a137d310a1c393059de2e401f08195ede211191831d4"}
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.255020 5005 scope.go:117] "RemoveContainer" containerID="4d2a9eb2800c2682c577225ec553c32240fbc85fb06c20035304d3f03ddba264"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.254674 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f46657c4d-qmj7f"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.257048 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f5b655546-bjxxg"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.257571 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f5b655546-bjxxg" event={"ID":"b568af4f-9018-43a0-abb5-5e656bb4e039","Type":"ContainerDied","Data":"779a2e11324bb5c2fa03b7614f573126227eb62d661596ef4c6d9d809909c6e5"}
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.259501 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-twxb2" event={"ID":"811df314-ec45-441f-9869-cb6b976163cb","Type":"ContainerStarted","Data":"a5347fde957aa10d2aa20a22ae6ea22c21001a4e6d1794c539dd59af0f2ba786"}
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.263101 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerStarted","Data":"6ab62bd21dc29bf730811a5a02b17e38d446d60a3cec73c9d3d3612f706e3fe3"}
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.263494 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="sg-core" containerID="cri-o://9e38886e64a6e9ebc07d1894d781ae259cc39425c0b722216b2f3cdca015ce53" gracePeriod=30
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.263485 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="proxy-httpd" containerID="cri-o://6ab62bd21dc29bf730811a5a02b17e38d446d60a3cec73c9d3d3612f706e3fe3" gracePeriod=30
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.263485 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-notification-agent" containerID="cri-o://1f63cfdbe351c06bd08e7935b521201df6bd960d2aad420ea764d13e9c42d04e" gracePeriod=30
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.263434 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-central-agent" containerID="cri-o://941a626e284f2209504b5ccd5a37adfc46db4675ff44e51267a4e83a3945561d" gracePeriod=30
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.264766 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.279424 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-twxb2" podStartSLOduration=2.536528515 podStartE2EDuration="11.279407527s" podCreationTimestamp="2026-02-25 11:37:10 +0000 UTC" firstStartedPulling="2026-02-25 11:37:11.660266936 +0000 UTC m=+1145.700999263" lastFinishedPulling="2026-02-25 11:37:20.403145948 +0000 UTC m=+1154.443878275" observedRunningTime="2026-02-25 11:37:21.278864501 +0000 UTC m=+1155.319596828" watchObservedRunningTime="2026-02-25 11:37:21.279407527 +0000 UTC m=+1155.320139864"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.301138 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.816231169 podStartE2EDuration="11.301122055s" podCreationTimestamp="2026-02-25 11:37:10 +0000 UTC" firstStartedPulling="2026-02-25 11:37:10.901869016 +0000 UTC m=+1144.942601343" lastFinishedPulling="2026-02-25 11:37:20.386759902 +0000 UTC m=+1154.427492229" observedRunningTime="2026-02-25 11:37:21.300173296 +0000 UTC m=+1155.340905633" watchObservedRunningTime="2026-02-25 11:37:21.301122055 +0000 UTC m=+1155.341854382"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.323852 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f5b655546-bjxxg"]
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.338896 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f5b655546-bjxxg"]
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.348335 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f46657c4d-qmj7f"]
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.355792 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f46657c4d-qmj7f"]
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.437552 5005 scope.go:117] "RemoveContainer" containerID="69013907147be4d78676e13f1a212c8cf91c9975f9e239bda625c13ee8a696a3"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.514719 5005 scope.go:117] "RemoveContainer" containerID="605440471bce3b821f21c3944458ca4a7b7938db6957d3b54b6bfe3a49a6298e"
Feb 25 11:37:21 crc kubenswrapper[5005]: I0225 11:37:21.535063 5005 scope.go:117] "RemoveContainer" containerID="7c495e8fbfb1d6621c6e39cd8b4230628f57149e411e7d8c1965a39dd11cca94"
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276346 5005 generic.go:334] "Generic (PLEG): container finished" podID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerID="6ab62bd21dc29bf730811a5a02b17e38d446d60a3cec73c9d3d3612f706e3fe3" exitCode=0
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276395 5005 generic.go:334] "Generic (PLEG): container finished" podID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerID="9e38886e64a6e9ebc07d1894d781ae259cc39425c0b722216b2f3cdca015ce53" exitCode=2
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276404 5005 generic.go:334] "Generic (PLEG): container finished" podID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerID="1f63cfdbe351c06bd08e7935b521201df6bd960d2aad420ea764d13e9c42d04e" exitCode=0
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276414 5005 generic.go:334] "Generic (PLEG): container finished" podID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerID="941a626e284f2209504b5ccd5a37adfc46db4675ff44e51267a4e83a3945561d" exitCode=0
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276444 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerDied","Data":"6ab62bd21dc29bf730811a5a02b17e38d446d60a3cec73c9d3d3612f706e3fe3"}
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276466 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerDied","Data":"9e38886e64a6e9ebc07d1894d781ae259cc39425c0b722216b2f3cdca015ce53"}
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276475 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerDied","Data":"1f63cfdbe351c06bd08e7935b521201df6bd960d2aad420ea764d13e9c42d04e"}
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.276483 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerDied","Data":"941a626e284f2209504b5ccd5a37adfc46db4675ff44e51267a4e83a3945561d"}
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.428521 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.461884 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-scripts\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.461939 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-run-httpd\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.462652 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.475557 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-scripts" (OuterVolumeSpecName: "scripts") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.563589 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-config-data\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.563694 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-combined-ca-bundle\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.563777 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-sg-core-conf-yaml\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.563812 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-log-httpd\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.564242 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jwfp\" (UniqueName: \"kubernetes.io/projected/ebcf2e95-a654-4ea8-9008-804d52fe7f80-kube-api-access-5jwfp\") pod \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\" (UID: \"ebcf2e95-a654-4ea8-9008-804d52fe7f80\") "
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.564735 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.565193 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.565213 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.565225 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcf2e95-a654-4ea8-9008-804d52fe7f80-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.570338 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcf2e95-a654-4ea8-9008-804d52fe7f80-kube-api-access-5jwfp" (OuterVolumeSpecName: "kube-api-access-5jwfp") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "kube-api-access-5jwfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.589627 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.652747 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.667018 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.667047 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.667057 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jwfp\" (UniqueName: \"kubernetes.io/projected/ebcf2e95-a654-4ea8-9008-804d52fe7f80-kube-api-access-5jwfp\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.692435 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-config-data" (OuterVolumeSpecName: "config-data") pod "ebcf2e95-a654-4ea8-9008-804d52fe7f80" (UID: "ebcf2e95-a654-4ea8-9008-804d52fe7f80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.696987 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" path="/var/lib/kubelet/pods/b568af4f-9018-43a0-abb5-5e656bb4e039/volumes"
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.697609 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" path="/var/lib/kubelet/pods/f37d127e-e5d1-45f3-8e44-262ab354e0c2/volumes"
Feb 25 11:37:22 crc kubenswrapper[5005]: I0225 11:37:22.768054 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf2e95-a654-4ea8-9008-804d52fe7f80-config-data\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.074097 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7445447f66-tpwqp"
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274097 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v25sp\" (UniqueName: \"kubernetes.io/projected/60d4805b-4c99-4d5c-9e31-744e20ba5c55-kube-api-access-v25sp\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274195 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-scripts\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274275 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60d4805b-4c99-4d5c-9e31-744e20ba5c55-logs\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274342 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-public-tls-certs\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274479 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-internal-tls-certs\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274688 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d4805b-4c99-4d5c-9e31-744e20ba5c55-logs" (OuterVolumeSpecName: "logs") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.274988 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-config-data\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.275137 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-combined-ca-bundle\") pod \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\" (UID: \"60d4805b-4c99-4d5c-9e31-744e20ba5c55\") "
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.275735 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60d4805b-4c99-4d5c-9e31-744e20ba5c55-logs\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.288595 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-scripts" (OuterVolumeSpecName: "scripts") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.294424 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcf2e95-a654-4ea8-9008-804d52fe7f80","Type":"ContainerDied","Data":"48f20834567b2f27c21bc587ba8cfe0ccd5352f9fb97fde85ccc5375056016ee"}
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.294497 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.294582 5005 scope.go:117] "RemoveContainer" containerID="6ab62bd21dc29bf730811a5a02b17e38d446d60a3cec73c9d3d3612f706e3fe3"
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.301067 5005 generic.go:334] "Generic (PLEG): container finished" podID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerID="96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb" exitCode=0
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.301114 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7445447f66-tpwqp" event={"ID":"60d4805b-4c99-4d5c-9e31-744e20ba5c55","Type":"ContainerDied","Data":"96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb"}
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.301143 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7445447f66-tpwqp" event={"ID":"60d4805b-4c99-4d5c-9e31-744e20ba5c55","Type":"ContainerDied","Data":"471925f01a0a55cc3786b6f5d0575a9d163e35bba711fb9c2b364656b1a5448c"}
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.301225 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7445447f66-tpwqp"
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.303571 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d4805b-4c99-4d5c-9e31-744e20ba5c55-kube-api-access-v25sp" (OuterVolumeSpecName: "kube-api-access-v25sp") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "kube-api-access-v25sp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.335126 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-config-data" (OuterVolumeSpecName: "config-data") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.336807 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.388587 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-scripts\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.388622 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-config-data\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.388633 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.388641 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v25sp\" (UniqueName: \"kubernetes.io/projected/60d4805b-4c99-4d5c-9e31-744e20ba5c55-kube-api-access-v25sp\") on node \"crc\" DevicePath \"\""
Feb
25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.400586 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.415565 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "60d4805b-4c99-4d5c-9e31-744e20ba5c55" (UID: "60d4805b-4c99-4d5c-9e31-744e20ba5c55"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.481039 5005 scope.go:117] "RemoveContainer" containerID="9e38886e64a6e9ebc07d1894d781ae259cc39425c0b722216b2f3cdca015ce53" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.486417 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.489758 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.491339 5005 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.491362 5005 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d4805b-4c99-4d5c-9e31-744e20ba5c55-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511543 5005 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.511887 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-api" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511904 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-api" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.511916 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-central-agent" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511923 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-central-agent" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.511941 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-log" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511948 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-log" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.511958 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon-log" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511963 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon-log" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.511973 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-api" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511978 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" 
containerName="placement-api" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.511991 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-notification-agent" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.511997 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-notification-agent" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.512024 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512029 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.512038 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="sg-core" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512043 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="sg-core" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.512051 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="proxy-httpd" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512057 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="proxy-httpd" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.512063 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-httpd" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512069 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-httpd" 
Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512212 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon-log" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512224 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-api" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512235 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b568af4f-9018-43a0-abb5-5e656bb4e039" containerName="neutron-httpd" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512247 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-log" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512257 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="ceilometer-central-agent" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512264 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="sg-core" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512276 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37d127e-e5d1-45f3-8e44-262ab354e0c2" containerName="horizon" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512284 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" containerName="proxy-httpd" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512293 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" containerName="placement-api" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.512303 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" 
containerName="ceilometer-notification-agent" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.513789 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.521064 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.521307 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.534599 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.536332 5005 scope.go:117] "RemoveContainer" containerID="1f63cfdbe351c06bd08e7935b521201df6bd960d2aad420ea764d13e9c42d04e" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.628649 5005 scope.go:117] "RemoveContainer" containerID="941a626e284f2209504b5ccd5a37adfc46db4675ff44e51267a4e83a3945561d" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.647914 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7445447f66-tpwqp"] Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.653619 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7445447f66-tpwqp"] Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.661458 5005 scope.go:117] "RemoveContainer" containerID="96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.683978 5005 scope.go:117] "RemoveContainer" containerID="c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.694972 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-config-data\") 
pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.695008 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-scripts\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.695030 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.695103 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglgw\" (UniqueName: \"kubernetes.io/projected/45835808-fd30-412e-a08b-4120e6e4ea9b-kube-api-access-xglgw\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.695119 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.695160 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-run-httpd\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc 
kubenswrapper[5005]: I0225 11:37:23.695188 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-log-httpd\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.711087 5005 scope.go:117] "RemoveContainer" containerID="96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.711809 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb\": container with ID starting with 96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb not found: ID does not exist" containerID="96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.711868 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb"} err="failed to get container status \"96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb\": rpc error: code = NotFound desc = could not find container \"96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb\": container with ID starting with 96ac18b7b690b3e2907e6a699ec81de0f5618c482b34150fe0e1f4ae27693deb not found: ID does not exist" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.711897 5005 scope.go:117] "RemoveContainer" containerID="c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118" Feb 25 11:37:23 crc kubenswrapper[5005]: E0225 11:37:23.712492 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118\": container with ID starting with c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118 not found: ID does not exist" containerID="c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.712531 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118"} err="failed to get container status \"c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118\": rpc error: code = NotFound desc = could not find container \"c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118\": container with ID starting with c506446cc34bef3a7aa96fc599a57363ab460e080fc6a1857943908fa464c118 not found: ID does not exist" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.796196 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xglgw\" (UniqueName: \"kubernetes.io/projected/45835808-fd30-412e-a08b-4120e6e4ea9b-kube-api-access-xglgw\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.796240 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.796294 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-run-httpd\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 
11:37:23.796325 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-log-httpd\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.796427 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-config-data\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.796453 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-scripts\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.796473 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.798497 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-run-httpd\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.798641 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-log-httpd\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " 
pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.802299 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-config-data\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.804150 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-scripts\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.804430 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.805035 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.812802 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xglgw\" (UniqueName: \"kubernetes.io/projected/45835808-fd30-412e-a08b-4120e6e4ea9b-kube-api-access-xglgw\") pod \"ceilometer-0\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " pod="openstack/ceilometer-0" Feb 25 11:37:23 crc kubenswrapper[5005]: I0225 11:37:23.869716 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:37:24 crc kubenswrapper[5005]: I0225 11:37:24.128914 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:24 crc kubenswrapper[5005]: W0225 11:37:24.138526 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45835808_fd30_412e_a08b_4120e6e4ea9b.slice/crio-0b0ae637b9a156cbbfacc313ffb63c649bc3078da53d81e0f1bd175af8009130 WatchSource:0}: Error finding container 0b0ae637b9a156cbbfacc313ffb63c649bc3078da53d81e0f1bd175af8009130: Status 404 returned error can't find the container with id 0b0ae637b9a156cbbfacc313ffb63c649bc3078da53d81e0f1bd175af8009130 Feb 25 11:37:24 crc kubenswrapper[5005]: I0225 11:37:24.309564 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerStarted","Data":"0b0ae637b9a156cbbfacc313ffb63c649bc3078da53d81e0f1bd175af8009130"} Feb 25 11:37:24 crc kubenswrapper[5005]: I0225 11:37:24.697112 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d4805b-4c99-4d5c-9e31-744e20ba5c55" path="/var/lib/kubelet/pods/60d4805b-4c99-4d5c-9e31-744e20ba5c55/volumes" Feb 25 11:37:24 crc kubenswrapper[5005]: I0225 11:37:24.698184 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebcf2e95-a654-4ea8-9008-804d52fe7f80" path="/var/lib/kubelet/pods/ebcf2e95-a654-4ea8-9008-804d52fe7f80/volumes" Feb 25 11:37:25 crc kubenswrapper[5005]: I0225 11:37:25.323675 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerStarted","Data":"af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714"} Feb 25 11:37:26 crc kubenswrapper[5005]: I0225 11:37:26.337454 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerStarted","Data":"a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd"} Feb 25 11:37:27 crc kubenswrapper[5005]: I0225 11:37:27.355036 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerStarted","Data":"399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1"} Feb 25 11:37:28 crc kubenswrapper[5005]: I0225 11:37:28.087573 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:37:28 crc kubenswrapper[5005]: I0225 11:37:28.087894 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:37:29 crc kubenswrapper[5005]: I0225 11:37:29.379394 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerStarted","Data":"fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc"} Feb 25 11:37:29 crc kubenswrapper[5005]: I0225 11:37:29.379722 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 11:37:29 crc kubenswrapper[5005]: I0225 11:37:29.407819 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.374282857 podStartE2EDuration="6.407798759s" podCreationTimestamp="2026-02-25 11:37:23 +0000 UTC" firstStartedPulling="2026-02-25 11:37:24.140505 +0000 
UTC m=+1158.181237327" lastFinishedPulling="2026-02-25 11:37:28.174020902 +0000 UTC m=+1162.214753229" observedRunningTime="2026-02-25 11:37:29.398828467 +0000 UTC m=+1163.439560794" watchObservedRunningTime="2026-02-25 11:37:29.407798759 +0000 UTC m=+1163.448531086" Feb 25 11:37:31 crc kubenswrapper[5005]: I0225 11:37:31.410858 5005 generic.go:334] "Generic (PLEG): container finished" podID="811df314-ec45-441f-9869-cb6b976163cb" containerID="a5347fde957aa10d2aa20a22ae6ea22c21001a4e6d1794c539dd59af0f2ba786" exitCode=0 Feb 25 11:37:31 crc kubenswrapper[5005]: I0225 11:37:31.410981 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-twxb2" event={"ID":"811df314-ec45-441f-9869-cb6b976163cb","Type":"ContainerDied","Data":"a5347fde957aa10d2aa20a22ae6ea22c21001a4e6d1794c539dd59af0f2ba786"} Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.758407 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.766870 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-config-data\") pod \"811df314-ec45-441f-9869-cb6b976163cb\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.767010 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-scripts\") pod \"811df314-ec45-441f-9869-cb6b976163cb\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.767058 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-combined-ca-bundle\") pod 
\"811df314-ec45-441f-9869-cb6b976163cb\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.767105 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdmg6\" (UniqueName: \"kubernetes.io/projected/811df314-ec45-441f-9869-cb6b976163cb-kube-api-access-tdmg6\") pod \"811df314-ec45-441f-9869-cb6b976163cb\" (UID: \"811df314-ec45-441f-9869-cb6b976163cb\") " Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.776856 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811df314-ec45-441f-9869-cb6b976163cb-kube-api-access-tdmg6" (OuterVolumeSpecName: "kube-api-access-tdmg6") pod "811df314-ec45-441f-9869-cb6b976163cb" (UID: "811df314-ec45-441f-9869-cb6b976163cb"). InnerVolumeSpecName "kube-api-access-tdmg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.777770 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-scripts" (OuterVolumeSpecName: "scripts") pod "811df314-ec45-441f-9869-cb6b976163cb" (UID: "811df314-ec45-441f-9869-cb6b976163cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.812544 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "811df314-ec45-441f-9869-cb6b976163cb" (UID: "811df314-ec45-441f-9869-cb6b976163cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.832384 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-config-data" (OuterVolumeSpecName: "config-data") pod "811df314-ec45-441f-9869-cb6b976163cb" (UID: "811df314-ec45-441f-9869-cb6b976163cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.869976 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.870019 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.870036 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdmg6\" (UniqueName: \"kubernetes.io/projected/811df314-ec45-441f-9869-cb6b976163cb-kube-api-access-tdmg6\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:32 crc kubenswrapper[5005]: I0225 11:37:32.870049 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811df314-ec45-441f-9869-cb6b976163cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.445420 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-twxb2" event={"ID":"811df314-ec45-441f-9869-cb6b976163cb","Type":"ContainerDied","Data":"4f80002795c426e28a628d63f3d7f79436147013c1a5b37b6d7f4a8ba9461ab7"} Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.445487 5005 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="4f80002795c426e28a628d63f3d7f79436147013c1a5b37b6d7f4a8ba9461ab7" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.445684 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-twxb2" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.553666 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 25 11:37:33 crc kubenswrapper[5005]: E0225 11:37:33.555960 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811df314-ec45-441f-9869-cb6b976163cb" containerName="nova-cell0-conductor-db-sync" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.555989 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="811df314-ec45-441f-9869-cb6b976163cb" containerName="nova-cell0-conductor-db-sync" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.556304 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="811df314-ec45-441f-9869-cb6b976163cb" containerName="nova-cell0-conductor-db-sync" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.557276 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.570471 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cm6lk" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.571969 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.575204 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.582346 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhcb\" (UniqueName: \"kubernetes.io/projected/f6b51fe3-5b9f-4745-8d5a-c9418091f431-kube-api-access-qrhcb\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.582544 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b51fe3-5b9f-4745-8d5a-c9418091f431-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.582660 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b51fe3-5b9f-4745-8d5a-c9418091f431-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.684068 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f6b51fe3-5b9f-4745-8d5a-c9418091f431-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.684355 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b51fe3-5b9f-4745-8d5a-c9418091f431-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.684485 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhcb\" (UniqueName: \"kubernetes.io/projected/f6b51fe3-5b9f-4745-8d5a-c9418091f431-kube-api-access-qrhcb\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.689500 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b51fe3-5b9f-4745-8d5a-c9418091f431-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.689555 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b51fe3-5b9f-4745-8d5a-c9418091f431-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.712958 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhcb\" (UniqueName: \"kubernetes.io/projected/f6b51fe3-5b9f-4745-8d5a-c9418091f431-kube-api-access-qrhcb\") pod \"nova-cell0-conductor-0\" (UID: 
\"f6b51fe3-5b9f-4745-8d5a-c9418091f431\") " pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:33 crc kubenswrapper[5005]: I0225 11:37:33.880364 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:34 crc kubenswrapper[5005]: I0225 11:37:34.100989 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 25 11:37:34 crc kubenswrapper[5005]: I0225 11:37:34.456071 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f6b51fe3-5b9f-4745-8d5a-c9418091f431","Type":"ContainerStarted","Data":"0058aae633dfabcd4b00c5d7a2377399e7aa033486d2c0a39009f2e4549be262"} Feb 25 11:37:34 crc kubenswrapper[5005]: I0225 11:37:34.456117 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f6b51fe3-5b9f-4745-8d5a-c9418091f431","Type":"ContainerStarted","Data":"847fd4d315d24a207824fd8c528a78599de5b6cf644412d37dbb4bc1de68552f"} Feb 25 11:37:34 crc kubenswrapper[5005]: I0225 11:37:34.456183 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:34 crc kubenswrapper[5005]: I0225 11:37:34.479112 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.479083462 podStartE2EDuration="1.479083462s" podCreationTimestamp="2026-02-25 11:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:34.468704217 +0000 UTC m=+1168.509436544" watchObservedRunningTime="2026-02-25 11:37:34.479083462 +0000 UTC m=+1168.519815809" Feb 25 11:37:43 crc kubenswrapper[5005]: I0225 11:37:43.921862 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 
11:37:44.439489 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x788q"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.440799 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.444658 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.444775 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.457669 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x788q"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.609164 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.610394 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.612233 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.619897 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.620026 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gv9\" (UniqueName: \"kubernetes.io/projected/aa5f2bfb-c847-4266-9def-11101efa2256-kube-api-access-72gv9\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.620125 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-scripts\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.620274 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-config-data\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.620411 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.631758 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.633532 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.635834 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.651224 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.746183 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.746257 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqzh\" (UniqueName: \"kubernetes.io/projected/b374c6f2-49de-499e-9c35-b0b859de60f2-kube-api-access-pgqzh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.746297 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gv9\" (UniqueName: \"kubernetes.io/projected/aa5f2bfb-c847-4266-9def-11101efa2256-kube-api-access-72gv9\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.769038 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-scripts\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.769200 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.769236 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-config-data\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.769263 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.799057 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-scripts\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.799306 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:44 crc 
kubenswrapper[5005]: I0225 11:37:44.801383 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gv9\" (UniqueName: \"kubernetes.io/projected/aa5f2bfb-c847-4266-9def-11101efa2256-kube-api-access-72gv9\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.802854 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.804094 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.807321 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-config-data\") pod \"nova-cell0-cell-mapping-x788q\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.819830 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.836846 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.862870 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.864084 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.869810 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.870764 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.870842 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgqzh\" (UniqueName: \"kubernetes.io/projected/b374c6f2-49de-499e-9c35-b0b859de60f2-kube-api-access-pgqzh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.870948 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.870992 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.871043 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d736fb4b-faba-4398-8a5a-5fa576351d40-logs\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.871107 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-config-data\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.871162 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.871774 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sbs\" (UniqueName: \"kubernetes.io/projected/64c05ccd-a75c-46df-8cbe-60f572b64666-kube-api-access-26sbs\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.871808 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.871849 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldm26\" (UniqueName: \"kubernetes.io/projected/d736fb4b-faba-4398-8a5a-5fa576351d40-kube-api-access-ldm26\") pod 
\"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.876191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.881499 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.887978 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.890032 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgqzh\" (UniqueName: \"kubernetes.io/projected/b374c6f2-49de-499e-9c35-b0b859de60f2-kube-api-access-pgqzh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.902491 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-bqh8f"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.903942 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.909563 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-bqh8f"] Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.922827 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.973188 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.973839 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.973907 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf7387c-6f81-405e-bc13-38a9a279741c-logs\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.973930 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d736fb4b-faba-4398-8a5a-5fa576351d40-logs\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.973961 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-config-data\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974135 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-26sbs\" (UniqueName: \"kubernetes.io/projected/64c05ccd-a75c-46df-8cbe-60f572b64666-kube-api-access-26sbs\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974172 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76rn\" (UniqueName: \"kubernetes.io/projected/aaf7387c-6f81-405e-bc13-38a9a279741c-kube-api-access-v76rn\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974196 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldm26\" (UniqueName: \"kubernetes.io/projected/d736fb4b-faba-4398-8a5a-5fa576351d40-kube-api-access-ldm26\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974216 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974235 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-config-data\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974264 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data\") pod 
\"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.974691 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d736fb4b-faba-4398-8a5a-5fa576351d40-logs\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.977224 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-config-data\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.978240 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.982785 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.992716 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.992774 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-26sbs\" (UniqueName: \"kubernetes.io/projected/64c05ccd-a75c-46df-8cbe-60f572b64666-kube-api-access-26sbs\") pod \"nova-scheduler-0\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:44 crc kubenswrapper[5005]: I0225 11:37:44.994361 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldm26\" (UniqueName: \"kubernetes.io/projected/d736fb4b-faba-4398-8a5a-5fa576351d40-kube-api-access-ldm26\") pod \"nova-metadata-0\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.062764 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.075946 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lnbt\" (UniqueName: \"kubernetes.io/projected/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-kube-api-access-9lnbt\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.076001 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76rn\" (UniqueName: \"kubernetes.io/projected/aaf7387c-6f81-405e-bc13-38a9a279741c-kube-api-access-v76rn\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.076126 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077129 
5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-config-data\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077257 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077283 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-config\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077343 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-dns-svc\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077393 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077475 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf7387c-6f81-405e-bc13-38a9a279741c-logs\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.077921 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf7387c-6f81-405e-bc13-38a9a279741c-logs\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.083295 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.083501 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-config-data\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.091953 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76rn\" (UniqueName: \"kubernetes.io/projected/aaf7387c-6f81-405e-bc13-38a9a279741c-kube-api-access-v76rn\") pod \"nova-api-0\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.177183 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.178860 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-dns-svc\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.178894 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.179005 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lnbt\" (UniqueName: \"kubernetes.io/projected/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-kube-api-access-9lnbt\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.179209 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.179241 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-config\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 
crc kubenswrapper[5005]: I0225 11:37:45.180282 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.180310 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.180547 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-dns-svc\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.181970 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-config\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.198809 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lnbt\" (UniqueName: \"kubernetes.io/projected/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-kube-api-access-9lnbt\") pod \"dnsmasq-dns-566b5b7845-bqh8f\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.251036 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.287250 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.292553 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.407326 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.428857 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ntj86"] Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.430528 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.440073 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ntj86"] Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.445791 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.445999 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.505786 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x788q"] Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.591553 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-scripts\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " 
pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.591655 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.591702 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgbvm\" (UniqueName: \"kubernetes.io/projected/1734d86e-d703-4000-9058-bdc27eae9765-kube-api-access-dgbvm\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.591730 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-config-data\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.611250 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374c6f2-49de-499e-9c35-b0b859de60f2","Type":"ContainerStarted","Data":"aae042aa48af8371a80bb07091defba5deb86c81b61da406c61bb781c0b7672f"} Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.612415 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x788q" event={"ID":"aa5f2bfb-c847-4266-9def-11101efa2256","Type":"ContainerStarted","Data":"2bd5f5425178090544fc83f731def14040aa3ee400ded4f946b3f8b177eb8cb5"} Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 
11:37:45.674010 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:45 crc kubenswrapper[5005]: W0225 11:37:45.677570 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaf7387c_6f81_405e_bc13_38a9a279741c.slice/crio-21bdfba20c1965ad3e57c8fe678f4b7b1889600f9b88e44cad40d14463cade29 WatchSource:0}: Error finding container 21bdfba20c1965ad3e57c8fe678f4b7b1889600f9b88e44cad40d14463cade29: Status 404 returned error can't find the container with id 21bdfba20c1965ad3e57c8fe678f4b7b1889600f9b88e44cad40d14463cade29 Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.693588 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgbvm\" (UniqueName: \"kubernetes.io/projected/1734d86e-d703-4000-9058-bdc27eae9765-kube-api-access-dgbvm\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.693646 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-config-data\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.693720 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-scripts\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.693783 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.708920 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-scripts\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.709838 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-config-data\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.710048 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.712934 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgbvm\" (UniqueName: \"kubernetes.io/projected/1734d86e-d703-4000-9058-bdc27eae9765-kube-api-access-dgbvm\") pod \"nova-cell1-conductor-db-sync-ntj86\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.751786 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.816963 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.915882 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-bqh8f"] Feb 25 11:37:45 crc kubenswrapper[5005]: W0225 11:37:45.922248 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29cad9cb_c8e5_4f68_89b4_cb0f89f33637.slice/crio-e0cc3f83f4331ce5af69427b0fc330aec43a7c0bd4e1474baea24a14ff35b733 WatchSource:0}: Error finding container e0cc3f83f4331ce5af69427b0fc330aec43a7c0bd4e1474baea24a14ff35b733: Status 404 returned error can't find the container with id e0cc3f83f4331ce5af69427b0fc330aec43a7c0bd4e1474baea24a14ff35b733 Feb 25 11:37:45 crc kubenswrapper[5005]: W0225 11:37:45.925700 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c05ccd_a75c_46df_8cbe_60f572b64666.slice/crio-61e25eb474a06b916ca75e40aff945a3b31878eda29e0862dac362cb40a14ada WatchSource:0}: Error finding container 61e25eb474a06b916ca75e40aff945a3b31878eda29e0862dac362cb40a14ada: Status 404 returned error can't find the container with id 61e25eb474a06b916ca75e40aff945a3b31878eda29e0862dac362cb40a14ada Feb 25 11:37:45 crc kubenswrapper[5005]: I0225 11:37:45.931846 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.217967 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ntj86"] Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.622856 5005 generic.go:334] "Generic (PLEG): container finished" podID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" 
containerID="b3e75160dd6b139853af5abbd366c4fe4dc7e89e9d54a0f46c3d17d9763facf1" exitCode=0 Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.622959 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" event={"ID":"29cad9cb-c8e5-4f68-89b4-cb0f89f33637","Type":"ContainerDied","Data":"b3e75160dd6b139853af5abbd366c4fe4dc7e89e9d54a0f46c3d17d9763facf1"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.623194 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" event={"ID":"29cad9cb-c8e5-4f68-89b4-cb0f89f33637","Type":"ContainerStarted","Data":"e0cc3f83f4331ce5af69427b0fc330aec43a7c0bd4e1474baea24a14ff35b733"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.625389 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ntj86" event={"ID":"1734d86e-d703-4000-9058-bdc27eae9765","Type":"ContainerStarted","Data":"8d018400ea1ec8acbfc543580027d54c18924188194205ba6bc96493fb10ef84"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.625436 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ntj86" event={"ID":"1734d86e-d703-4000-9058-bdc27eae9765","Type":"ContainerStarted","Data":"1f45adabd8ae0efe0f32f68ed84e27873aa36318f494fc4ff23ea4a380f5efa5"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.627127 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x788q" event={"ID":"aa5f2bfb-c847-4266-9def-11101efa2256","Type":"ContainerStarted","Data":"d1f0b3ffae2f57e898b99e6db839bf3a05034740859bb1e89de16ea30c2661d0"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.628100 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d736fb4b-faba-4398-8a5a-5fa576351d40","Type":"ContainerStarted","Data":"e2a788e04b9d6acedc2141f423eae4c0039689d6af4aff83ce6829ccdf57d205"} Feb 25 11:37:46 crc 
kubenswrapper[5005]: I0225 11:37:46.629103 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c05ccd-a75c-46df-8cbe-60f572b64666","Type":"ContainerStarted","Data":"61e25eb474a06b916ca75e40aff945a3b31878eda29e0862dac362cb40a14ada"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.630185 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaf7387c-6f81-405e-bc13-38a9a279741c","Type":"ContainerStarted","Data":"21bdfba20c1965ad3e57c8fe678f4b7b1889600f9b88e44cad40d14463cade29"} Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.667621 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x788q" podStartSLOduration=2.667606021 podStartE2EDuration="2.667606021s" podCreationTimestamp="2026-02-25 11:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:46.663888248 +0000 UTC m=+1180.704620575" watchObservedRunningTime="2026-02-25 11:37:46.667606021 +0000 UTC m=+1180.708338348" Feb 25 11:37:46 crc kubenswrapper[5005]: I0225 11:37:46.678208 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ntj86" podStartSLOduration=1.678189101 podStartE2EDuration="1.678189101s" podCreationTimestamp="2026-02-25 11:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:46.677632025 +0000 UTC m=+1180.718364352" watchObservedRunningTime="2026-02-25 11:37:46.678189101 +0000 UTC m=+1180.718921428" Feb 25 11:37:48 crc kubenswrapper[5005]: I0225 11:37:48.355596 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:48 crc kubenswrapper[5005]: I0225 11:37:48.370631 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.707392 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d736fb4b-faba-4398-8a5a-5fa576351d40","Type":"ContainerStarted","Data":"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159"} Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.707741 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-metadata" containerID="cri-o://94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159" gracePeriod=30 Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.707745 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d736fb4b-faba-4398-8a5a-5fa576351d40","Type":"ContainerStarted","Data":"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388"} Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.707478 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-log" containerID="cri-o://651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388" gracePeriod=30 Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.709418 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c05ccd-a75c-46df-8cbe-60f572b64666","Type":"ContainerStarted","Data":"8dbb84d365d833f0bb3d9d72c5005f234ee050ab0606e3a36b3ce2a5a40137de"} Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.712219 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaf7387c-6f81-405e-bc13-38a9a279741c","Type":"ContainerStarted","Data":"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938"} Feb 25 11:37:49 crc kubenswrapper[5005]: 
I0225 11:37:49.712265 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaf7387c-6f81-405e-bc13-38a9a279741c","Type":"ContainerStarted","Data":"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18"} Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.716991 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374c6f2-49de-499e-9c35-b0b859de60f2","Type":"ContainerStarted","Data":"349bb79dd4d251f0509c62a4ed8d2a2b8477ffa133040216112b2441f98e7030"} Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.717090 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b374c6f2-49de-499e-9c35-b0b859de60f2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://349bb79dd4d251f0509c62a4ed8d2a2b8477ffa133040216112b2441f98e7030" gracePeriod=30 Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.721457 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" event={"ID":"29cad9cb-c8e5-4f68-89b4-cb0f89f33637","Type":"ContainerStarted","Data":"3d4aaa7d5db69734c5c2219d284b8fdfcea18d492ec838d88857b5e33b6b6e24"} Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.722357 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.736481 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.629392214 podStartE2EDuration="5.736462866s" podCreationTimestamp="2026-02-25 11:37:44 +0000 UTC" firstStartedPulling="2026-02-25 11:37:45.831049595 +0000 UTC m=+1179.871781912" lastFinishedPulling="2026-02-25 11:37:48.938120237 +0000 UTC m=+1182.978852564" observedRunningTime="2026-02-25 11:37:49.729706001 +0000 UTC m=+1183.770438338" watchObservedRunningTime="2026-02-25 
11:37:49.736462866 +0000 UTC m=+1183.777195193" Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.750903 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.747519312 podStartE2EDuration="5.750889074s" podCreationTimestamp="2026-02-25 11:37:44 +0000 UTC" firstStartedPulling="2026-02-25 11:37:45.92962719 +0000 UTC m=+1179.970359517" lastFinishedPulling="2026-02-25 11:37:48.932996952 +0000 UTC m=+1182.973729279" observedRunningTime="2026-02-25 11:37:49.745991666 +0000 UTC m=+1183.786724003" watchObservedRunningTime="2026-02-25 11:37:49.750889074 +0000 UTC m=+1183.791621401" Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.764695 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" podStartSLOduration=5.764674291 podStartE2EDuration="5.764674291s" podCreationTimestamp="2026-02-25 11:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:49.763810665 +0000 UTC m=+1183.804543002" watchObservedRunningTime="2026-02-25 11:37:49.764674291 +0000 UTC m=+1183.805406618" Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.789233 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.287388656 podStartE2EDuration="5.789213084s" podCreationTimestamp="2026-02-25 11:37:44 +0000 UTC" firstStartedPulling="2026-02-25 11:37:45.432600907 +0000 UTC m=+1179.473333234" lastFinishedPulling="2026-02-25 11:37:48.934425335 +0000 UTC m=+1182.975157662" observedRunningTime="2026-02-25 11:37:49.779311985 +0000 UTC m=+1183.820044312" watchObservedRunningTime="2026-02-25 11:37:49.789213084 +0000 UTC m=+1183.829945411" Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.799805 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.547536655 podStartE2EDuration="5.799783094s" podCreationTimestamp="2026-02-25 11:37:44 +0000 UTC" firstStartedPulling="2026-02-25 11:37:45.679941448 +0000 UTC m=+1179.720673775" lastFinishedPulling="2026-02-25 11:37:48.932187887 +0000 UTC m=+1182.972920214" observedRunningTime="2026-02-25 11:37:49.796727961 +0000 UTC m=+1183.837460288" watchObservedRunningTime="2026-02-25 11:37:49.799783094 +0000 UTC m=+1183.840515421" Feb 25 11:37:49 crc kubenswrapper[5005]: I0225 11:37:49.923687 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.251397 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.251465 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.287607 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.307913 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.491147 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-config-data\") pod \"d736fb4b-faba-4398-8a5a-5fa576351d40\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.491345 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldm26\" (UniqueName: \"kubernetes.io/projected/d736fb4b-faba-4398-8a5a-5fa576351d40-kube-api-access-ldm26\") pod \"d736fb4b-faba-4398-8a5a-5fa576351d40\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.491480 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d736fb4b-faba-4398-8a5a-5fa576351d40-logs\") pod \"d736fb4b-faba-4398-8a5a-5fa576351d40\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.491561 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-combined-ca-bundle\") pod \"d736fb4b-faba-4398-8a5a-5fa576351d40\" (UID: \"d736fb4b-faba-4398-8a5a-5fa576351d40\") " Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.492534 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d736fb4b-faba-4398-8a5a-5fa576351d40-logs" (OuterVolumeSpecName: "logs") pod "d736fb4b-faba-4398-8a5a-5fa576351d40" (UID: "d736fb4b-faba-4398-8a5a-5fa576351d40"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.496318 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d736fb4b-faba-4398-8a5a-5fa576351d40-kube-api-access-ldm26" (OuterVolumeSpecName: "kube-api-access-ldm26") pod "d736fb4b-faba-4398-8a5a-5fa576351d40" (UID: "d736fb4b-faba-4398-8a5a-5fa576351d40"). InnerVolumeSpecName "kube-api-access-ldm26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.525452 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d736fb4b-faba-4398-8a5a-5fa576351d40" (UID: "d736fb4b-faba-4398-8a5a-5fa576351d40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.541067 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-config-data" (OuterVolumeSpecName: "config-data") pod "d736fb4b-faba-4398-8a5a-5fa576351d40" (UID: "d736fb4b-faba-4398-8a5a-5fa576351d40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.593950 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.593987 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldm26\" (UniqueName: \"kubernetes.io/projected/d736fb4b-faba-4398-8a5a-5fa576351d40-kube-api-access-ldm26\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.594003 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d736fb4b-faba-4398-8a5a-5fa576351d40-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.594012 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d736fb4b-faba-4398-8a5a-5fa576351d40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.731385 5005 generic.go:334] "Generic (PLEG): container finished" podID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerID="94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159" exitCode=0 Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.731420 5005 generic.go:334] "Generic (PLEG): container finished" podID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerID="651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388" exitCode=143 Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.731828 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d736fb4b-faba-4398-8a5a-5fa576351d40","Type":"ContainerDied","Data":"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159"} Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.731878 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d736fb4b-faba-4398-8a5a-5fa576351d40","Type":"ContainerDied","Data":"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388"} Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.731890 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d736fb4b-faba-4398-8a5a-5fa576351d40","Type":"ContainerDied","Data":"e2a788e04b9d6acedc2141f423eae4c0039689d6af4aff83ce6829ccdf57d205"} Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.731908 5005 scope.go:117] "RemoveContainer" containerID="94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.732036 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.782019 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.807182 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.814000 5005 scope.go:117] "RemoveContainer" containerID="651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.830564 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:50 crc kubenswrapper[5005]: E0225 11:37:50.831106 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-metadata" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.831127 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-metadata" Feb 25 11:37:50 crc kubenswrapper[5005]: E0225 11:37:50.831163 5005 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-log" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.831171 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-log" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.831356 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-log" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.831457 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" containerName="nova-metadata-metadata" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.832746 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.841887 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.842004 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.847492 5005 scope.go:117] "RemoveContainer" containerID="94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159" Feb 25 11:37:50 crc kubenswrapper[5005]: E0225 11:37:50.848738 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159\": container with ID starting with 94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159 not found: ID does not exist" containerID="94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.848770 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159"} err="failed to get container status \"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159\": rpc error: code = NotFound desc = could not find container \"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159\": container with ID starting with 94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159 not found: ID does not exist" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.848789 5005 scope.go:117] "RemoveContainer" containerID="651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388" Feb 25 11:37:50 crc kubenswrapper[5005]: E0225 11:37:50.849086 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388\": container with ID starting with 651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388 not found: ID does not exist" containerID="651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.849102 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388"} err="failed to get container status \"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388\": rpc error: code = NotFound desc = could not find container \"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388\": container with ID starting with 651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388 not found: ID does not exist" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.849117 5005 scope.go:117] "RemoveContainer" containerID="94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 
11:37:50.849277 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159"} err="failed to get container status \"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159\": rpc error: code = NotFound desc = could not find container \"94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159\": container with ID starting with 94f8b048370ad4ccbd5452185123ba591ceb2206f306222144db5e0122692159 not found: ID does not exist" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.849295 5005 scope.go:117] "RemoveContainer" containerID="651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.849464 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388"} err="failed to get container status \"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388\": rpc error: code = NotFound desc = could not find container \"651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388\": container with ID starting with 651546a587f3c6cbffe9fe0de326543a4c88cddea8b1be00de8a59ededcd3388 not found: ID does not exist" Feb 25 11:37:50 crc kubenswrapper[5005]: I0225 11:37:50.853061 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.000717 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e15576e-9682-47eb-9843-ff2bd0fd5d64-logs\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.000760 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5psf2\" (UniqueName: \"kubernetes.io/projected/2e15576e-9682-47eb-9843-ff2bd0fd5d64-kube-api-access-5psf2\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.000807 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.000882 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-config-data\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.000900 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.102514 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.102932 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-config-data\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.103050 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.103205 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e15576e-9682-47eb-9843-ff2bd0fd5d64-logs\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.103321 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5psf2\" (UniqueName: \"kubernetes.io/projected/2e15576e-9682-47eb-9843-ff2bd0fd5d64-kube-api-access-5psf2\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.104496 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e15576e-9682-47eb-9843-ff2bd0fd5d64-logs\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.108153 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-config-data\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc 
kubenswrapper[5005]: I0225 11:37:51.108889 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.109325 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.133635 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5psf2\" (UniqueName: \"kubernetes.io/projected/2e15576e-9682-47eb-9843-ff2bd0fd5d64-kube-api-access-5psf2\") pod \"nova-metadata-0\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.155970 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.617733 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:51 crc kubenswrapper[5005]: I0225 11:37:51.745162 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e15576e-9682-47eb-9843-ff2bd0fd5d64","Type":"ContainerStarted","Data":"d74494fa8c52ae00a2542d0b1aec917f51a28ee18015d93b8c61493d8142ff5b"} Feb 25 11:37:52 crc kubenswrapper[5005]: I0225 11:37:52.705714 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d736fb4b-faba-4398-8a5a-5fa576351d40" path="/var/lib/kubelet/pods/d736fb4b-faba-4398-8a5a-5fa576351d40/volumes" Feb 25 11:37:52 crc kubenswrapper[5005]: I0225 11:37:52.758488 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e15576e-9682-47eb-9843-ff2bd0fd5d64","Type":"ContainerStarted","Data":"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5"} Feb 25 11:37:52 crc kubenswrapper[5005]: I0225 11:37:52.758553 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e15576e-9682-47eb-9843-ff2bd0fd5d64","Type":"ContainerStarted","Data":"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3"} Feb 25 11:37:52 crc kubenswrapper[5005]: I0225 11:37:52.764149 5005 generic.go:334] "Generic (PLEG): container finished" podID="aa5f2bfb-c847-4266-9def-11101efa2256" containerID="d1f0b3ffae2f57e898b99e6db839bf3a05034740859bb1e89de16ea30c2661d0" exitCode=0 Feb 25 11:37:52 crc kubenswrapper[5005]: I0225 11:37:52.764188 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x788q" event={"ID":"aa5f2bfb-c847-4266-9def-11101efa2256","Type":"ContainerDied","Data":"d1f0b3ffae2f57e898b99e6db839bf3a05034740859bb1e89de16ea30c2661d0"} Feb 25 11:37:52 crc kubenswrapper[5005]: I0225 11:37:52.813946 
5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.813924233 podStartE2EDuration="2.813924233s" podCreationTimestamp="2026-02-25 11:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:52.801391193 +0000 UTC m=+1186.842123540" watchObservedRunningTime="2026-02-25 11:37:52.813924233 +0000 UTC m=+1186.854656570" Feb 25 11:37:53 crc kubenswrapper[5005]: I0225 11:37:53.776735 5005 generic.go:334] "Generic (PLEG): container finished" podID="1734d86e-d703-4000-9058-bdc27eae9765" containerID="8d018400ea1ec8acbfc543580027d54c18924188194205ba6bc96493fb10ef84" exitCode=0 Feb 25 11:37:53 crc kubenswrapper[5005]: I0225 11:37:53.776828 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ntj86" event={"ID":"1734d86e-d703-4000-9058-bdc27eae9765","Type":"ContainerDied","Data":"8d018400ea1ec8acbfc543580027d54c18924188194205ba6bc96493fb10ef84"} Feb 25 11:37:53 crc kubenswrapper[5005]: I0225 11:37:53.878744 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.145312 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.178974 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gv9\" (UniqueName: \"kubernetes.io/projected/aa5f2bfb-c847-4266-9def-11101efa2256-kube-api-access-72gv9\") pod \"aa5f2bfb-c847-4266-9def-11101efa2256\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.179089 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-scripts\") pod \"aa5f2bfb-c847-4266-9def-11101efa2256\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.179225 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-config-data\") pod \"aa5f2bfb-c847-4266-9def-11101efa2256\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.179303 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-combined-ca-bundle\") pod \"aa5f2bfb-c847-4266-9def-11101efa2256\" (UID: \"aa5f2bfb-c847-4266-9def-11101efa2256\") " Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.184150 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-scripts" (OuterVolumeSpecName: "scripts") pod "aa5f2bfb-c847-4266-9def-11101efa2256" (UID: "aa5f2bfb-c847-4266-9def-11101efa2256"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.192586 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5f2bfb-c847-4266-9def-11101efa2256-kube-api-access-72gv9" (OuterVolumeSpecName: "kube-api-access-72gv9") pod "aa5f2bfb-c847-4266-9def-11101efa2256" (UID: "aa5f2bfb-c847-4266-9def-11101efa2256"). InnerVolumeSpecName "kube-api-access-72gv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.206298 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa5f2bfb-c847-4266-9def-11101efa2256" (UID: "aa5f2bfb-c847-4266-9def-11101efa2256"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.210079 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-config-data" (OuterVolumeSpecName: "config-data") pod "aa5f2bfb-c847-4266-9def-11101efa2256" (UID: "aa5f2bfb-c847-4266-9def-11101efa2256"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.282642 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.282685 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.282703 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa5f2bfb-c847-4266-9def-11101efa2256-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.282724 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gv9\" (UniqueName: \"kubernetes.io/projected/aa5f2bfb-c847-4266-9def-11101efa2256-kube-api-access-72gv9\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.787011 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x788q" Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.787004 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x788q" event={"ID":"aa5f2bfb-c847-4266-9def-11101efa2256","Type":"ContainerDied","Data":"2bd5f5425178090544fc83f731def14040aa3ee400ded4f946b3f8b177eb8cb5"} Feb 25 11:37:54 crc kubenswrapper[5005]: I0225 11:37:54.787065 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd5f5425178090544fc83f731def14040aa3ee400ded4f946b3f8b177eb8cb5" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.002779 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.002996 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-log" containerID="cri-o://ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18" gracePeriod=30 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.003359 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-api" containerID="cri-o://2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938" gracePeriod=30 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.024632 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.025182 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="64c05ccd-a75c-46df-8cbe-60f572b64666" containerName="nova-scheduler-scheduler" containerID="cri-o://8dbb84d365d833f0bb3d9d72c5005f234ee050ab0606e3a36b3ce2a5a40137de" gracePeriod=30 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 
11:37:55.096133 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.096421 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-log" containerID="cri-o://daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3" gracePeriod=30 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.096655 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-metadata" containerID="cri-o://8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5" gracePeriod=30 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.294524 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.368850 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-wx6kk"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.406697 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" containerName="dnsmasq-dns" containerID="cri-o://b058113ab4aabc2fbb0bd521ac34532433b4f3944166e7fa34a033e8a1ff8016" gracePeriod=10 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.505947 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.515628 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542244 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgbvm\" (UniqueName: \"kubernetes.io/projected/1734d86e-d703-4000-9058-bdc27eae9765-kube-api-access-dgbvm\") pod \"1734d86e-d703-4000-9058-bdc27eae9765\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542287 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-config-data\") pod \"aaf7387c-6f81-405e-bc13-38a9a279741c\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542335 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-config-data\") pod \"1734d86e-d703-4000-9058-bdc27eae9765\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542385 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf7387c-6f81-405e-bc13-38a9a279741c-logs\") pod \"aaf7387c-6f81-405e-bc13-38a9a279741c\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542491 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-combined-ca-bundle\") pod \"1734d86e-d703-4000-9058-bdc27eae9765\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542518 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-scripts\") pod \"1734d86e-d703-4000-9058-bdc27eae9765\" (UID: \"1734d86e-d703-4000-9058-bdc27eae9765\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542633 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v76rn\" (UniqueName: \"kubernetes.io/projected/aaf7387c-6f81-405e-bc13-38a9a279741c-kube-api-access-v76rn\") pod \"aaf7387c-6f81-405e-bc13-38a9a279741c\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.542701 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-combined-ca-bundle\") pod \"aaf7387c-6f81-405e-bc13-38a9a279741c\" (UID: \"aaf7387c-6f81-405e-bc13-38a9a279741c\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.543232 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf7387c-6f81-405e-bc13-38a9a279741c-logs" (OuterVolumeSpecName: "logs") pod "aaf7387c-6f81-405e-bc13-38a9a279741c" (UID: "aaf7387c-6f81-405e-bc13-38a9a279741c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.554925 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaf7387c-6f81-405e-bc13-38a9a279741c-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.563115 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-scripts" (OuterVolumeSpecName: "scripts") pod "1734d86e-d703-4000-9058-bdc27eae9765" (UID: "1734d86e-d703-4000-9058-bdc27eae9765"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.564708 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1734d86e-d703-4000-9058-bdc27eae9765-kube-api-access-dgbvm" (OuterVolumeSpecName: "kube-api-access-dgbvm") pod "1734d86e-d703-4000-9058-bdc27eae9765" (UID: "1734d86e-d703-4000-9058-bdc27eae9765"). InnerVolumeSpecName "kube-api-access-dgbvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.592960 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf7387c-6f81-405e-bc13-38a9a279741c-kube-api-access-v76rn" (OuterVolumeSpecName: "kube-api-access-v76rn") pod "aaf7387c-6f81-405e-bc13-38a9a279741c" (UID: "aaf7387c-6f81-405e-bc13-38a9a279741c"). InnerVolumeSpecName "kube-api-access-v76rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.594426 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-config-data" (OuterVolumeSpecName: "config-data") pod "1734d86e-d703-4000-9058-bdc27eae9765" (UID: "1734d86e-d703-4000-9058-bdc27eae9765"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.602556 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-config-data" (OuterVolumeSpecName: "config-data") pod "aaf7387c-6f81-405e-bc13-38a9a279741c" (UID: "aaf7387c-6f81-405e-bc13-38a9a279741c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.607047 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaf7387c-6f81-405e-bc13-38a9a279741c" (UID: "aaf7387c-6f81-405e-bc13-38a9a279741c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.631561 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1734d86e-d703-4000-9058-bdc27eae9765" (UID: "1734d86e-d703-4000-9058-bdc27eae9765"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666420 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666449 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666460 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v76rn\" (UniqueName: \"kubernetes.io/projected/aaf7387c-6f81-405e-bc13-38a9a279741c-kube-api-access-v76rn\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666472 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666480 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgbvm\" (UniqueName: \"kubernetes.io/projected/1734d86e-d703-4000-9058-bdc27eae9765-kube-api-access-dgbvm\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666488 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf7387c-6f81-405e-bc13-38a9a279741c-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.666496 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1734d86e-d703-4000-9058-bdc27eae9765-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.756478 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.798551 5005 generic.go:334] "Generic (PLEG): container finished" podID="a6f62bc8-4834-483f-b68d-9f4859378352" containerID="b058113ab4aabc2fbb0bd521ac34532433b4f3944166e7fa34a033e8a1ff8016" exitCode=0 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.798605 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" event={"ID":"a6f62bc8-4834-483f-b68d-9f4859378352","Type":"ContainerDied","Data":"b058113ab4aabc2fbb0bd521ac34532433b4f3944166e7fa34a033e8a1ff8016"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800329 5005 generic.go:334] "Generic (PLEG): container finished" podID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerID="2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938" exitCode=0 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800342 5005 generic.go:334] "Generic (PLEG): container finished" podID="aaf7387c-6f81-405e-bc13-38a9a279741c" 
containerID="ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18" exitCode=143 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800413 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaf7387c-6f81-405e-bc13-38a9a279741c","Type":"ContainerDied","Data":"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800430 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaf7387c-6f81-405e-bc13-38a9a279741c","Type":"ContainerDied","Data":"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800439 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aaf7387c-6f81-405e-bc13-38a9a279741c","Type":"ContainerDied","Data":"21bdfba20c1965ad3e57c8fe678f4b7b1889600f9b88e44cad40d14463cade29"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800473 5005 scope.go:117] "RemoveContainer" containerID="2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.800669 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.802491 5005 generic.go:334] "Generic (PLEG): container finished" podID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerID="8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5" exitCode=0 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.802518 5005 generic.go:334] "Generic (PLEG): container finished" podID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerID="daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3" exitCode=143 Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.802560 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e15576e-9682-47eb-9843-ff2bd0fd5d64","Type":"ContainerDied","Data":"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.802586 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e15576e-9682-47eb-9843-ff2bd0fd5d64","Type":"ContainerDied","Data":"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.802596 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e15576e-9682-47eb-9843-ff2bd0fd5d64","Type":"ContainerDied","Data":"d74494fa8c52ae00a2542d0b1aec917f51a28ee18015d93b8c61493d8142ff5b"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.802814 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.805384 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ntj86" event={"ID":"1734d86e-d703-4000-9058-bdc27eae9765","Type":"ContainerDied","Data":"1f45adabd8ae0efe0f32f68ed84e27873aa36318f494fc4ff23ea4a380f5efa5"} Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.805410 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f45adabd8ae0efe0f32f68ed84e27873aa36318f494fc4ff23ea4a380f5efa5" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.806280 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ntj86" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.831203 5005 scope.go:117] "RemoveContainer" containerID="ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.868467 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-config-data\") pod \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.868560 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-combined-ca-bundle\") pod \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.868659 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-nova-metadata-tls-certs\") pod \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\" (UID: 
\"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.868683 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5psf2\" (UniqueName: \"kubernetes.io/projected/2e15576e-9682-47eb-9843-ff2bd0fd5d64-kube-api-access-5psf2\") pod \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.870955 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.875879 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e15576e-9682-47eb-9843-ff2bd0fd5d64-logs\") pod \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\" (UID: \"2e15576e-9682-47eb-9843-ff2bd0fd5d64\") " Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.880037 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e15576e-9682-47eb-9843-ff2bd0fd5d64-logs" (OuterVolumeSpecName: "logs") pod "2e15576e-9682-47eb-9843-ff2bd0fd5d64" (UID: "2e15576e-9682-47eb-9843-ff2bd0fd5d64"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.883974 5005 scope.go:117] "RemoveContainer" containerID="2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.884415 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938\": container with ID starting with 2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938 not found: ID does not exist" containerID="2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.884447 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938"} err="failed to get container status \"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938\": rpc error: code = NotFound desc = could not find container \"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938\": container with ID starting with 2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.884466 5005 scope.go:117] "RemoveContainer" containerID="ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.886772 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18\": container with ID starting with ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18 not found: ID does not exist" containerID="ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.886821 
5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18"} err="failed to get container status \"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18\": rpc error: code = NotFound desc = could not find container \"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18\": container with ID starting with ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.886851 5005 scope.go:117] "RemoveContainer" containerID="2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.887354 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938"} err="failed to get container status \"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938\": rpc error: code = NotFound desc = could not find container \"2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938\": container with ID starting with 2825ca6d8113420407ba2463cb1da55b050658cef1d9b67fb6d80855ec9c1938 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.887476 5005 scope.go:117] "RemoveContainer" containerID="ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.888550 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18"} err="failed to get container status \"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18\": rpc error: code = NotFound desc = could not find container \"ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18\": container with ID starting with 
ccde9036bd885d9b2d8b304c6e4f854065143e4a7fbfc9d2b2c5580e6ca31a18 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.888581 5005 scope.go:117] "RemoveContainer" containerID="8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.889401 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e15576e-9682-47eb-9843-ff2bd0fd5d64-kube-api-access-5psf2" (OuterVolumeSpecName: "kube-api-access-5psf2") pod "2e15576e-9682-47eb-9843-ff2bd0fd5d64" (UID: "2e15576e-9682-47eb-9843-ff2bd0fd5d64"). InnerVolumeSpecName "kube-api-access-5psf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.897350 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.898078 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5f2bfb-c847-4266-9def-11101efa2256" containerName="nova-manage" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.898156 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5f2bfb-c847-4266-9def-11101efa2256" containerName="nova-manage" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.898249 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-api" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.898309 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-api" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.898399 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1734d86e-d703-4000-9058-bdc27eae9765" containerName="nova-cell1-conductor-db-sync" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.898465 5005 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1734d86e-d703-4000-9058-bdc27eae9765" containerName="nova-cell1-conductor-db-sync" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.898526 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-log" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.898581 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-log" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.898636 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-metadata" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.898690 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-metadata" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.898745 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-log" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.898796 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-log" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899007 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-log" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899071 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5f2bfb-c847-4266-9def-11101efa2256" containerName="nova-manage" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899127 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" containerName="nova-api-api" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899182 5005 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1734d86e-d703-4000-9058-bdc27eae9765" containerName="nova-cell1-conductor-db-sync" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899232 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-log" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899291 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" containerName="nova-metadata-metadata" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.899920 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.902347 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.912691 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.925705 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.927795 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.929312 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e15576e-9682-47eb-9843-ff2bd0fd5d64" (UID: "2e15576e-9682-47eb-9843-ff2bd0fd5d64"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.930242 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-config-data" (OuterVolumeSpecName: "config-data") pod "2e15576e-9682-47eb-9843-ff2bd0fd5d64" (UID: "2e15576e-9682-47eb-9843-ff2bd0fd5d64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.930656 5005 scope.go:117] "RemoveContainer" containerID="daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.935349 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.936073 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" containerName="dnsmasq-dns" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.936093 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" containerName="dnsmasq-dns" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.936107 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" containerName="init" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.936113 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" containerName="init" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.936292 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" containerName="dnsmasq-dns" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.937352 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.940207 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.951964 5005 scope.go:117] "RemoveContainer" containerID="8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.954504 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.955043 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5\": container with ID starting with 8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5 not found: ID does not exist" containerID="8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.955078 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5"} err="failed to get container status \"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5\": rpc error: code = NotFound desc = could not find container \"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5\": container with ID starting with 8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.955105 5005 scope.go:117] "RemoveContainer" containerID="daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3" Feb 25 11:37:55 crc kubenswrapper[5005]: E0225 11:37:55.955310 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3\": container with ID starting with daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3 not found: ID does not exist" containerID="daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.955331 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3"} err="failed to get container status \"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3\": rpc error: code = NotFound desc = could not find container \"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3\": container with ID starting with daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.955342 5005 scope.go:117] "RemoveContainer" containerID="8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.955513 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5"} err="failed to get container status \"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5\": rpc error: code = NotFound desc = could not find container \"8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5\": container with ID starting with 8ec8c6782f1ff82f629260f9f42306cddf0b7a55c08e0371080d4aa3b275f4d5 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.955532 5005 scope.go:117] "RemoveContainer" containerID="daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.956039 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3"} err="failed to get container status \"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3\": rpc error: code = NotFound desc = could not find container \"daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3\": container with ID starting with daf3cb88b4bec226a7e420cb2d14944806b6aa3f31ed9965e6130f14e3a133f3 not found: ID does not exist" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.959470 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2e15576e-9682-47eb-9843-ff2bd0fd5d64" (UID: "2e15576e-9682-47eb-9843-ff2bd0fd5d64"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981353 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981438 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m9xt\" (UniqueName: \"kubernetes.io/projected/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-kube-api-access-2m9xt\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981653 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981849 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981892 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981907 5005 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15576e-9682-47eb-9843-ff2bd0fd5d64-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981919 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5psf2\" (UniqueName: \"kubernetes.io/projected/2e15576e-9682-47eb-9843-ff2bd0fd5d64-kube-api-access-5psf2\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:55 crc kubenswrapper[5005]: I0225 11:37:55.981930 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e15576e-9682-47eb-9843-ff2bd0fd5d64-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.082870 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdgsh\" (UniqueName: \"kubernetes.io/projected/a6f62bc8-4834-483f-b68d-9f4859378352-kube-api-access-sdgsh\") pod \"a6f62bc8-4834-483f-b68d-9f4859378352\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.083161 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-sb\") pod \"a6f62bc8-4834-483f-b68d-9f4859378352\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.083487 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-config\") pod \"a6f62bc8-4834-483f-b68d-9f4859378352\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.083580 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-dns-svc\") pod \"a6f62bc8-4834-483f-b68d-9f4859378352\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.083689 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-nb\") pod \"a6f62bc8-4834-483f-b68d-9f4859378352\" (UID: \"a6f62bc8-4834-483f-b68d-9f4859378352\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084012 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084173 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084340 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m9xt\" (UniqueName: \"kubernetes.io/projected/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-kube-api-access-2m9xt\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084364 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rndr\" (UniqueName: \"kubernetes.io/projected/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-kube-api-access-2rndr\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084544 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-config-data\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084899 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.084931 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-logs\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.085877 
5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f62bc8-4834-483f-b68d-9f4859378352-kube-api-access-sdgsh" (OuterVolumeSpecName: "kube-api-access-sdgsh") pod "a6f62bc8-4834-483f-b68d-9f4859378352" (UID: "a6f62bc8-4834-483f-b68d-9f4859378352"). InnerVolumeSpecName "kube-api-access-sdgsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.088646 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.089436 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.100351 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m9xt\" (UniqueName: \"kubernetes.io/projected/e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df-kube-api-access-2m9xt\") pod \"nova-cell1-conductor-0\" (UID: \"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df\") " pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.142585 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6f62bc8-4834-483f-b68d-9f4859378352" (UID: "a6f62bc8-4834-483f-b68d-9f4859378352"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.146266 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6f62bc8-4834-483f-b68d-9f4859378352" (UID: "a6f62bc8-4834-483f-b68d-9f4859378352"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.149382 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-config" (OuterVolumeSpecName: "config") pod "a6f62bc8-4834-483f-b68d-9f4859378352" (UID: "a6f62bc8-4834-483f-b68d-9f4859378352"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.159114 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6f62bc8-4834-483f-b68d-9f4859378352" (UID: "a6f62bc8-4834-483f-b68d-9f4859378352"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.187465 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-logs\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.187822 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.187868 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rndr\" (UniqueName: \"kubernetes.io/projected/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-kube-api-access-2rndr\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.187953 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-config-data\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.188064 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdgsh\" (UniqueName: \"kubernetes.io/projected/a6f62bc8-4834-483f-b68d-9f4859378352-kube-api-access-sdgsh\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.188086 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.188100 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.188112 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.188124 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f62bc8-4834-483f-b68d-9f4859378352-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.194500 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-logs\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.197456 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.208117 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-config-data\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.211047 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rndr\" (UniqueName: 
\"kubernetes.io/projected/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-kube-api-access-2rndr\") pod \"nova-api-0\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.224318 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.265634 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.443493 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.454473 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.462022 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.463526 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.470798 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.471361 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.477189 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.594723 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.594807 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhfr\" (UniqueName: \"kubernetes.io/projected/03bc0cdc-e98f-4161-aa11-534e22b5be40-kube-api-access-zrhfr\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.594954 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bc0cdc-e98f-4161-aa11-534e22b5be40-logs\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.595189 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-config-data\") pod \"nova-metadata-0\" (UID: 
\"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.595274 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.698763 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.698832 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrhfr\" (UniqueName: \"kubernetes.io/projected/03bc0cdc-e98f-4161-aa11-534e22b5be40-kube-api-access-zrhfr\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.698894 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bc0cdc-e98f-4161-aa11-534e22b5be40-logs\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.698951 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-config-data\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.698984 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.700076 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bc0cdc-e98f-4161-aa11-534e22b5be40-logs\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.709308 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-config-data\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.709638 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.710222 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e15576e-9682-47eb-9843-ff2bd0fd5d64" path="/var/lib/kubelet/pods/2e15576e-9682-47eb-9843-ff2bd0fd5d64/volumes" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.710986 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf7387c-6f81-405e-bc13-38a9a279741c" path="/var/lib/kubelet/pods/aaf7387c-6f81-405e-bc13-38a9a279741c/volumes" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.720011 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.724570 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrhfr\" (UniqueName: \"kubernetes.io/projected/03bc0cdc-e98f-4161-aa11-534e22b5be40-kube-api-access-zrhfr\") pod \"nova-metadata-0\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.762580 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.762843 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="06987e0a-c281-4cbf-acdf-5831dd0b3561" containerName="kube-state-metrics" containerID="cri-o://f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace" gracePeriod=30 Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.782774 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.788362 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.826648 5005 generic.go:334] "Generic (PLEG): container finished" podID="64c05ccd-a75c-46df-8cbe-60f572b64666" containerID="8dbb84d365d833f0bb3d9d72c5005f234ee050ab0606e3a36b3ce2a5a40137de" exitCode=0 Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.826714 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c05ccd-a75c-46df-8cbe-60f572b64666","Type":"ContainerDied","Data":"8dbb84d365d833f0bb3d9d72c5005f234ee050ab0606e3a36b3ce2a5a40137de"} Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.831234 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" event={"ID":"a6f62bc8-4834-483f-b68d-9f4859378352","Type":"ContainerDied","Data":"6f3c6899a15a9e128749c77743644109c40c4f97d0359ed82d22ad76b1711658"} Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.831264 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-wx6kk" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.831282 5005 scope.go:117] "RemoveContainer" containerID="b058113ab4aabc2fbb0bd521ac34532433b4f3944166e7fa34a033e8a1ff8016" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.837249 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df","Type":"ContainerStarted","Data":"3c9a17d625ad34b765912312176352fc9f3dfcc0cfe9e3dbb7943dfe3b9637d5"} Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.855026 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.881363 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-wx6kk"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.885043 5005 scope.go:117] "RemoveContainer" containerID="59ded5a7bff5500b0366ee3a26f41f9ff45436b7d4176028d930469f5f13f99e" Feb 25 11:37:56 crc kubenswrapper[5005]: W0225 11:37:56.888934 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a7e414b_bfff_44be_9dd0_9c73445bfc5c.slice/crio-633f601c34cf3bf3666bfb2d1762c5b1ff3c9f4d6c0a28057bb4f19bf681c573 WatchSource:0}: Error finding container 633f601c34cf3bf3666bfb2d1762c5b1ff3c9f4d6c0a28057bb4f19bf681c573: Status 404 returned error can't find the container with id 633f601c34cf3bf3666bfb2d1762c5b1ff3c9f4d6c0a28057bb4f19bf681c573 Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.893067 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-wx6kk"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.903644 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.905478 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-combined-ca-bundle\") pod \"64c05ccd-a75c-46df-8cbe-60f572b64666\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.905548 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26sbs\" (UniqueName: \"kubernetes.io/projected/64c05ccd-a75c-46df-8cbe-60f572b64666-kube-api-access-26sbs\") pod \"64c05ccd-a75c-46df-8cbe-60f572b64666\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " Feb 25 11:37:56 crc 
kubenswrapper[5005]: I0225 11:37:56.905616 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data\") pod \"64c05ccd-a75c-46df-8cbe-60f572b64666\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.916278 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c05ccd-a75c-46df-8cbe-60f572b64666-kube-api-access-26sbs" (OuterVolumeSpecName: "kube-api-access-26sbs") pod "64c05ccd-a75c-46df-8cbe-60f572b64666" (UID: "64c05ccd-a75c-46df-8cbe-60f572b64666"). InnerVolumeSpecName "kube-api-access-26sbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:56 crc kubenswrapper[5005]: E0225 11:37:56.953786 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data podName:64c05ccd-a75c-46df-8cbe-60f572b64666 nodeName:}" failed. No retries permitted until 2026-02-25 11:37:57.453753144 +0000 UTC m=+1191.494485471 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data") pod "64c05ccd-a75c-46df-8cbe-60f572b64666" (UID: "64c05ccd-a75c-46df-8cbe-60f572b64666") : error deleting /var/lib/kubelet/pods/64c05ccd-a75c-46df-8cbe-60f572b64666/volume-subpaths: remove /var/lib/kubelet/pods/64c05ccd-a75c-46df-8cbe-60f572b64666/volume-subpaths: no such file or directory Feb 25 11:37:56 crc kubenswrapper[5005]: I0225 11:37:56.959474 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64c05ccd-a75c-46df-8cbe-60f572b64666" (UID: "64c05ccd-a75c-46df-8cbe-60f572b64666"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.011444 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.011480 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26sbs\" (UniqueName: \"kubernetes.io/projected/64c05ccd-a75c-46df-8cbe-60f572b64666-kube-api-access-26sbs\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.284939 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:37:57 crc kubenswrapper[5005]: W0225 11:37:57.291448 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03bc0cdc_e98f_4161_aa11_534e22b5be40.slice/crio-a4d6b1f76b749fda0127ad0e84bdea7db36d23b464e7d0e2be8769ed9f930c89 WatchSource:0}: Error finding container a4d6b1f76b749fda0127ad0e84bdea7db36d23b464e7d0e2be8769ed9f930c89: Status 404 returned error can't find the container with id a4d6b1f76b749fda0127ad0e84bdea7db36d23b464e7d0e2be8769ed9f930c89 Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.333536 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.518066 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data\") pod \"64c05ccd-a75c-46df-8cbe-60f572b64666\" (UID: \"64c05ccd-a75c-46df-8cbe-60f572b64666\") " Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.518539 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmg8r\" (UniqueName: \"kubernetes.io/projected/06987e0a-c281-4cbf-acdf-5831dd0b3561-kube-api-access-bmg8r\") pod \"06987e0a-c281-4cbf-acdf-5831dd0b3561\" (UID: \"06987e0a-c281-4cbf-acdf-5831dd0b3561\") " Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.523186 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06987e0a-c281-4cbf-acdf-5831dd0b3561-kube-api-access-bmg8r" (OuterVolumeSpecName: "kube-api-access-bmg8r") pod "06987e0a-c281-4cbf-acdf-5831dd0b3561" (UID: "06987e0a-c281-4cbf-acdf-5831dd0b3561"). InnerVolumeSpecName "kube-api-access-bmg8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.524468 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data" (OuterVolumeSpecName: "config-data") pod "64c05ccd-a75c-46df-8cbe-60f572b64666" (UID: "64c05ccd-a75c-46df-8cbe-60f572b64666"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.620588 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmg8r\" (UniqueName: \"kubernetes.io/projected/06987e0a-c281-4cbf-acdf-5831dd0b3561-kube-api-access-bmg8r\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.620618 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c05ccd-a75c-46df-8cbe-60f572b64666-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.822931 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.823326 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="ceilometer-notification-agent" containerID="cri-o://a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd" gracePeriod=30 Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.823396 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="proxy-httpd" containerID="cri-o://fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc" gracePeriod=30 Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.823326 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="sg-core" containerID="cri-o://399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1" gracePeriod=30 Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.823206 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" 
containerName="ceilometer-central-agent" containerID="cri-o://af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714" gracePeriod=30 Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.846637 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03bc0cdc-e98f-4161-aa11-534e22b5be40","Type":"ContainerStarted","Data":"70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.846677 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03bc0cdc-e98f-4161-aa11-534e22b5be40","Type":"ContainerStarted","Data":"760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.846688 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03bc0cdc-e98f-4161-aa11-534e22b5be40","Type":"ContainerStarted","Data":"a4d6b1f76b749fda0127ad0e84bdea7db36d23b464e7d0e2be8769ed9f930c89"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.848984 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c05ccd-a75c-46df-8cbe-60f572b64666","Type":"ContainerDied","Data":"61e25eb474a06b916ca75e40aff945a3b31878eda29e0862dac362cb40a14ada"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.849039 5005 scope.go:117] "RemoveContainer" containerID="8dbb84d365d833f0bb3d9d72c5005f234ee050ab0606e3a36b3ce2a5a40137de" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.849141 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.859304 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a7e414b-bfff-44be-9dd0-9c73445bfc5c","Type":"ContainerStarted","Data":"11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.859362 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a7e414b-bfff-44be-9dd0-9c73445bfc5c","Type":"ContainerStarted","Data":"5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.859386 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a7e414b-bfff-44be-9dd0-9c73445bfc5c","Type":"ContainerStarted","Data":"633f601c34cf3bf3666bfb2d1762c5b1ff3c9f4d6c0a28057bb4f19bf681c573"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.873093 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df","Type":"ContainerStarted","Data":"9d09c69ec2031a6c0ed8d84461e6e97de2f5aa0298aed77aae8de44785f25039"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.873272 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.873304 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8732845230000001 podStartE2EDuration="1.873284523s" podCreationTimestamp="2026-02-25 11:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:57.872953464 +0000 UTC m=+1191.913685791" watchObservedRunningTime="2026-02-25 11:37:57.873284523 +0000 UTC m=+1191.914016850" Feb 25 
11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.877187 5005 generic.go:334] "Generic (PLEG): container finished" podID="06987e0a-c281-4cbf-acdf-5831dd0b3561" containerID="f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace" exitCode=2 Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.877231 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06987e0a-c281-4cbf-acdf-5831dd0b3561","Type":"ContainerDied","Data":"f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.877253 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06987e0a-c281-4cbf-acdf-5831dd0b3561","Type":"ContainerDied","Data":"eb1dfda2041125fd85dfe3a61ea07850bf76f61a293536aaf600dc6f458e6848"} Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.877297 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.886652 5005 scope.go:117] "RemoveContainer" containerID="f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.891958 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.891945169 podStartE2EDuration="2.891945169s" podCreationTimestamp="2026-02-25 11:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:57.890139364 +0000 UTC m=+1191.930871711" watchObservedRunningTime="2026-02-25 11:37:57.891945169 +0000 UTC m=+1191.932677496" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.911205 5005 scope.go:117] "RemoveContainer" containerID="f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace" Feb 25 11:37:57 crc kubenswrapper[5005]: E0225 
11:37:57.911636 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace\": container with ID starting with f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace not found: ID does not exist" containerID="f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.911664 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace"} err="failed to get container status \"f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace\": rpc error: code = NotFound desc = could not find container \"f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace\": container with ID starting with f5b619d3f6c89177173ffd5e7305627a3041510c96518fba1ce69bb1dcccaace not found: ID does not exist" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.919842 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9198273930000003 podStartE2EDuration="2.919827393s" podCreationTimestamp="2026-02-25 11:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:57.903783448 +0000 UTC m=+1191.944515775" watchObservedRunningTime="2026-02-25 11:37:57.919827393 +0000 UTC m=+1191.960559710" Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.961537 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:57 crc kubenswrapper[5005]: I0225 11:37:57.996218 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.005174 5005 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.020242 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.036178 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: E0225 11:37:58.036567 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06987e0a-c281-4cbf-acdf-5831dd0b3561" containerName="kube-state-metrics" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.036583 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="06987e0a-c281-4cbf-acdf-5831dd0b3561" containerName="kube-state-metrics" Feb 25 11:37:58 crc kubenswrapper[5005]: E0225 11:37:58.036624 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c05ccd-a75c-46df-8cbe-60f572b64666" containerName="nova-scheduler-scheduler" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.036632 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c05ccd-a75c-46df-8cbe-60f572b64666" containerName="nova-scheduler-scheduler" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.036787 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c05ccd-a75c-46df-8cbe-60f572b64666" containerName="nova-scheduler-scheduler" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.036808 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="06987e0a-c281-4cbf-acdf-5831dd0b3561" containerName="kube-state-metrics" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.037420 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.043664 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.049471 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.059071 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.061765 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.063784 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.063957 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.067438 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.091705 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.091762 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:37:58 crc kubenswrapper[5005]: 
I0225 11:37:58.091807 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.092706 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"436a11adc02a3406c7c2d7029cfaa74683b64c268bdd958d676e56e989c38e2c"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.092765 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://436a11adc02a3406c7c2d7029cfaa74683b64c268bdd958d676e56e989c38e2c" gracePeriod=600 Feb 25 11:37:58 crc kubenswrapper[5005]: E0225 11:37:58.130003 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45835808_fd30_412e_a08b_4120e6e4ea9b.slice/crio-fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45835808_fd30_412e_a08b_4120e6e4ea9b.slice/crio-conmon-fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc.scope\": RecentStats: unable to find data in memory cache]" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.237213 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcshr\" (UniqueName: \"kubernetes.io/projected/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-api-access-tcshr\") pod \"kube-state-metrics-0\" (UID: 
\"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.237805 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.237874 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2f4g\" (UniqueName: \"kubernetes.io/projected/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-kube-api-access-q2f4g\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.237977 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-config-data\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.238060 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.238147 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" 
(UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.239619 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341695 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-config-data\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341753 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341806 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341826 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" 
Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341886 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcshr\" (UniqueName: \"kubernetes.io/projected/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-api-access-tcshr\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341918 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.341997 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2f4g\" (UniqueName: \"kubernetes.io/projected/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-kube-api-access-q2f4g\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.347483 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-config-data\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.347658 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.348451 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.348969 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b524360-8bfc-488d-b2ec-2668afe9b13d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.357492 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcshr\" (UniqueName: \"kubernetes.io/projected/4b524360-8bfc-488d-b2ec-2668afe9b13d-kube-api-access-tcshr\") pod \"kube-state-metrics-0\" (UID: \"4b524360-8bfc-488d-b2ec-2668afe9b13d\") " pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.357744 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.359495 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2f4g\" (UniqueName: \"kubernetes.io/projected/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-kube-api-access-q2f4g\") pod \"nova-scheduler-0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.394199 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.658549 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.694707 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06987e0a-c281-4cbf-acdf-5831dd0b3561" path="/var/lib/kubelet/pods/06987e0a-c281-4cbf-acdf-5831dd0b3561/volumes" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.695447 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c05ccd-a75c-46df-8cbe-60f572b64666" path="/var/lib/kubelet/pods/64c05ccd-a75c-46df-8cbe-60f572b64666/volumes" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.695946 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f62bc8-4834-483f-b68d-9f4859378352" path="/var/lib/kubelet/pods/a6f62bc8-4834-483f-b68d-9f4859378352/volumes" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.841309 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.888044 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4b524360-8bfc-488d-b2ec-2668afe9b13d","Type":"ContainerStarted","Data":"5c8d18ad81eae2f36f2c996533d21c99d5053de5ea224084404a36acf6db0cde"} Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.894117 5005 generic.go:334] "Generic (PLEG): container finished" podID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerID="fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc" exitCode=0 Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.894139 5005 generic.go:334] "Generic (PLEG): container finished" podID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerID="399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1" exitCode=2 Feb 25 11:37:58 crc kubenswrapper[5005]: 
I0225 11:37:58.894147 5005 generic.go:334] "Generic (PLEG): container finished" podID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerID="af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714" exitCode=0 Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.894175 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerDied","Data":"fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc"} Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.894191 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerDied","Data":"399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1"} Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.894200 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerDied","Data":"af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714"} Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.899996 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"436a11adc02a3406c7c2d7029cfaa74683b64c268bdd958d676e56e989c38e2c"} Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.900043 5005 scope.go:117] "RemoveContainer" containerID="bd803ca207119b9d3512cdb7e88427039abb6164280815395a904d77de0c108e" Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.900523 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="436a11adc02a3406c7c2d7029cfaa74683b64c268bdd958d676e56e989c38e2c" exitCode=0 Feb 25 11:37:58 crc kubenswrapper[5005]: I0225 11:37:58.901699 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"c816cc274b9544be8ead9d4693ad6ccdd4ec8d5d3bca03945f606f68950d1d53"} Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.101817 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.922004 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4b524360-8bfc-488d-b2ec-2668afe9b13d","Type":"ContainerStarted","Data":"018cc315b7f5e8161971fdbd39e32fe00cb455cf49bee2aab652d356b5caae54"} Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.922398 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.924452 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d06d7bee-8488-40a7-aa2b-56c3e45a92f0","Type":"ContainerStarted","Data":"09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d"} Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.924477 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d06d7bee-8488-40a7-aa2b-56c3e45a92f0","Type":"ContainerStarted","Data":"f03c79de2601947d90f647c3afc986f53c1e169158ead95c44c391f9eddf2953"} Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.947823 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.478398767 podStartE2EDuration="2.947806944s" podCreationTimestamp="2026-02-25 11:37:57 +0000 UTC" firstStartedPulling="2026-02-25 11:37:58.853599964 +0000 UTC m=+1192.894332291" lastFinishedPulling="2026-02-25 11:37:59.323008141 +0000 UTC m=+1193.363740468" observedRunningTime="2026-02-25 11:37:59.942709379 +0000 UTC m=+1193.983441746" 
watchObservedRunningTime="2026-02-25 11:37:59.947806944 +0000 UTC m=+1193.988539271" Feb 25 11:37:59 crc kubenswrapper[5005]: I0225 11:37:59.965222 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9652008309999998 podStartE2EDuration="2.965200831s" podCreationTimestamp="2026-02-25 11:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:37:59.959022833 +0000 UTC m=+1193.999755180" watchObservedRunningTime="2026-02-25 11:37:59.965200831 +0000 UTC m=+1194.005933158" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.137178 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533658-khbfz"] Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.138893 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.142297 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.142531 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.142855 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.156043 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533658-khbfz"] Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.278601 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv778\" (UniqueName: 
\"kubernetes.io/projected/3df09937-0397-4bf9-8f3a-435b47224f7e-kube-api-access-dv778\") pod \"auto-csr-approver-29533658-khbfz\" (UID: \"3df09937-0397-4bf9-8f3a-435b47224f7e\") " pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.380596 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv778\" (UniqueName: \"kubernetes.io/projected/3df09937-0397-4bf9-8f3a-435b47224f7e-kube-api-access-dv778\") pod \"auto-csr-approver-29533658-khbfz\" (UID: \"3df09937-0397-4bf9-8f3a-435b47224f7e\") " pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.400633 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv778\" (UniqueName: \"kubernetes.io/projected/3df09937-0397-4bf9-8f3a-435b47224f7e-kube-api-access-dv778\") pod \"auto-csr-approver-29533658-khbfz\" (UID: \"3df09937-0397-4bf9-8f3a-435b47224f7e\") " pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.481907 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:00 crc kubenswrapper[5005]: I0225 11:38:00.981159 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533658-khbfz"] Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.257190 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.430355 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.624674 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-log-httpd\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.624730 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-run-httpd\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.624757 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-scripts\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.624790 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xglgw\" (UniqueName: \"kubernetes.io/projected/45835808-fd30-412e-a08b-4120e6e4ea9b-kube-api-access-xglgw\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.624814 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-sg-core-conf-yaml\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.624933 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-config-data\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.625004 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-combined-ca-bundle\") pod \"45835808-fd30-412e-a08b-4120e6e4ea9b\" (UID: \"45835808-fd30-412e-a08b-4120e6e4ea9b\") " Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.625195 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.625448 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.631450 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.633811 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45835808-fd30-412e-a08b-4120e6e4ea9b-kube-api-access-xglgw" (OuterVolumeSpecName: "kube-api-access-xglgw") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "kube-api-access-xglgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.634702 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-scripts" (OuterVolumeSpecName: "scripts") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.673806 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.722994 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.727280 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.727322 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45835808-fd30-412e-a08b-4120e6e4ea9b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.727333 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.727346 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xglgw\" (UniqueName: \"kubernetes.io/projected/45835808-fd30-412e-a08b-4120e6e4ea9b-kube-api-access-xglgw\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.727358 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.747180 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-config-data" (OuterVolumeSpecName: "config-data") pod "45835808-fd30-412e-a08b-4120e6e4ea9b" (UID: "45835808-fd30-412e-a08b-4120e6e4ea9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.789998 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.790931 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.829364 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45835808-fd30-412e-a08b-4120e6e4ea9b-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.944065 5005 generic.go:334] "Generic (PLEG): container finished" podID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerID="a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd" exitCode=0 Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.944110 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.944129 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerDied","Data":"a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd"} Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.948363 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45835808-fd30-412e-a08b-4120e6e4ea9b","Type":"ContainerDied","Data":"0b0ae637b9a156cbbfacc313ffb63c649bc3078da53d81e0f1bd175af8009130"} Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.948407 5005 scope.go:117] "RemoveContainer" containerID="fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc" Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.954604 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533658-khbfz" event={"ID":"3df09937-0397-4bf9-8f3a-435b47224f7e","Type":"ContainerStarted","Data":"d9efb0b22766e89a67949a9983bc920c913c814c6a74e44c7da44bf453fce3ca"} Feb 25 11:38:01 crc kubenswrapper[5005]: I0225 11:38:01.982109 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.003483 5005 scope.go:117] "RemoveContainer" containerID="399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.004928 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017127 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.017569 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="ceilometer-central-agent" Feb 25 11:38:02 
crc kubenswrapper[5005]: I0225 11:38:02.017594 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="ceilometer-central-agent" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.017614 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="ceilometer-notification-agent" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017623 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="ceilometer-notification-agent" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.017644 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="sg-core" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017653 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="sg-core" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.017694 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="proxy-httpd" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017704 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="proxy-httpd" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017899 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="proxy-httpd" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017916 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="ceilometer-notification-agent" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017932 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" 
containerName="ceilometer-central-agent" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.017946 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" containerName="sg-core" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.020396 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.029517 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.029968 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.030115 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.035211 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.042400 5005 scope.go:117] "RemoveContainer" containerID="a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.074471 5005 scope.go:117] "RemoveContainer" containerID="af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.102549 5005 scope.go:117] "RemoveContainer" containerID="fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.103098 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc\": container with ID starting with fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc not found: ID does not exist" 
containerID="fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.103198 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc"} err="failed to get container status \"fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc\": rpc error: code = NotFound desc = could not find container \"fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc\": container with ID starting with fac5a83243191d974fc974d5dc47451bcfd89c6eb0e9686872f2450566538dcc not found: ID does not exist" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.103281 5005 scope.go:117] "RemoveContainer" containerID="399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.105479 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1\": container with ID starting with 399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1 not found: ID does not exist" containerID="399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.105593 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1"} err="failed to get container status \"399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1\": rpc error: code = NotFound desc = could not find container \"399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1\": container with ID starting with 399c2868f58b821f3e030cc0ac2d5add99cda777d9fe6b2bccac2d2e79e394f1 not found: ID does not exist" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.105624 5005 scope.go:117] 
"RemoveContainer" containerID="a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.105850 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd\": container with ID starting with a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd not found: ID does not exist" containerID="a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.105901 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd"} err="failed to get container status \"a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd\": rpc error: code = NotFound desc = could not find container \"a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd\": container with ID starting with a9b7b859269734a99c9c673b0e21637efecf33da22ee42ad3e53575d04dde3bd not found: ID does not exist" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.105924 5005 scope.go:117] "RemoveContainer" containerID="af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714" Feb 25 11:38:02 crc kubenswrapper[5005]: E0225 11:38:02.106511 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714\": container with ID starting with af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714 not found: ID does not exist" containerID="af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.106542 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714"} err="failed to get container status \"af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714\": rpc error: code = NotFound desc = could not find container \"af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714\": container with ID starting with af4f7f24a693a2f06856bfef21ad15bb8f408973731870fd8f72d9df49fe5714 not found: ID does not exist" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.137723 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.137757 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-log-httpd\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.137799 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.137825 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-run-httpd\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: 
I0225 11:38:02.137945 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-scripts\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.137974 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-config-data\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.137988 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.138055 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95dh\" (UniqueName: \"kubernetes.io/projected/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-kube-api-access-m95dh\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240178 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-run-httpd\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240271 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-scripts\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240319 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-config-data\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240336 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240382 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95dh\" (UniqueName: \"kubernetes.io/projected/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-kube-api-access-m95dh\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240458 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.240473 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-log-httpd\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: 
I0225 11:38:02.240500 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.243428 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-run-httpd\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.243656 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-log-httpd\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.245919 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-scripts\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.247816 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.248657 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-config-data\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " 
pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.253653 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.254938 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.261191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95dh\" (UniqueName: \"kubernetes.io/projected/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-kube-api-access-m95dh\") pod \"ceilometer-0\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.360603 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.693977 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45835808-fd30-412e-a08b-4120e6e4ea9b" path="/var/lib/kubelet/pods/45835808-fd30-412e-a08b-4120e6e4ea9b/volumes" Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.881737 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.964646 5005 generic.go:334] "Generic (PLEG): container finished" podID="3df09937-0397-4bf9-8f3a-435b47224f7e" containerID="909fdd9ef9f189ddcc58320f4ff35609076afe4d8c7fc5800ec148320d07a934" exitCode=0 Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.964726 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533658-khbfz" event={"ID":"3df09937-0397-4bf9-8f3a-435b47224f7e","Type":"ContainerDied","Data":"909fdd9ef9f189ddcc58320f4ff35609076afe4d8c7fc5800ec148320d07a934"} Feb 25 11:38:02 crc kubenswrapper[5005]: I0225 11:38:02.966113 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerStarted","Data":"c0f9a9cb56c65f24a9107015c0add265de7c42ca2774d062b7d7df403c19242f"} Feb 25 11:38:03 crc kubenswrapper[5005]: I0225 11:38:03.658905 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 25 11:38:03 crc kubenswrapper[5005]: I0225 11:38:03.997238 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerStarted","Data":"6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0"} Feb 25 11:38:04 crc kubenswrapper[5005]: I0225 11:38:04.376763 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:04 crc kubenswrapper[5005]: I0225 11:38:04.394834 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv778\" (UniqueName: \"kubernetes.io/projected/3df09937-0397-4bf9-8f3a-435b47224f7e-kube-api-access-dv778\") pod \"3df09937-0397-4bf9-8f3a-435b47224f7e\" (UID: \"3df09937-0397-4bf9-8f3a-435b47224f7e\") " Feb 25 11:38:04 crc kubenswrapper[5005]: I0225 11:38:04.400851 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df09937-0397-4bf9-8f3a-435b47224f7e-kube-api-access-dv778" (OuterVolumeSpecName: "kube-api-access-dv778") pod "3df09937-0397-4bf9-8f3a-435b47224f7e" (UID: "3df09937-0397-4bf9-8f3a-435b47224f7e"). InnerVolumeSpecName "kube-api-access-dv778". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:04 crc kubenswrapper[5005]: I0225 11:38:04.505137 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv778\" (UniqueName: \"kubernetes.io/projected/3df09937-0397-4bf9-8f3a-435b47224f7e-kube-api-access-dv778\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.007643 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerStarted","Data":"ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4"} Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.008117 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerStarted","Data":"f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359"} Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.009666 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533658-khbfz" 
event={"ID":"3df09937-0397-4bf9-8f3a-435b47224f7e","Type":"ContainerDied","Data":"d9efb0b22766e89a67949a9983bc920c913c814c6a74e44c7da44bf453fce3ca"} Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.009698 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9efb0b22766e89a67949a9983bc920c913c814c6a74e44c7da44bf453fce3ca" Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.009750 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533658-khbfz" Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.453678 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533652-h5tzk"] Feb 25 11:38:05 crc kubenswrapper[5005]: I0225 11:38:05.463097 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533652-h5tzk"] Feb 25 11:38:06 crc kubenswrapper[5005]: I0225 11:38:06.266866 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 25 11:38:06 crc kubenswrapper[5005]: I0225 11:38:06.267603 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 25 11:38:06 crc kubenswrapper[5005]: I0225 11:38:06.706486 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919273df-651a-4f1d-a11c-739f7dabad38" path="/var/lib/kubelet/pods/919273df-651a-4f1d-a11c-739f7dabad38/volumes" Feb 25 11:38:06 crc kubenswrapper[5005]: I0225 11:38:06.789300 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 25 11:38:06 crc kubenswrapper[5005]: I0225 11:38:06.789366 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.038046 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerStarted","Data":"55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde"} Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.038213 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.077954 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.370844636 podStartE2EDuration="6.077921022s" podCreationTimestamp="2026-02-25 11:38:01 +0000 UTC" firstStartedPulling="2026-02-25 11:38:02.88659345 +0000 UTC m=+1196.927325777" lastFinishedPulling="2026-02-25 11:38:06.593669836 +0000 UTC m=+1200.634402163" observedRunningTime="2026-02-25 11:38:07.064126673 +0000 UTC m=+1201.104859010" watchObservedRunningTime="2026-02-25 11:38:07.077921022 +0000 UTC m=+1201.118653349" Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.350534 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.350614 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.806545 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Feb 25 11:38:07 crc kubenswrapper[5005]: I0225 11:38:07.806563 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:08 crc kubenswrapper[5005]: I0225 11:38:08.412723 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 25 11:38:08 crc kubenswrapper[5005]: I0225 11:38:08.659022 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 25 11:38:08 crc kubenswrapper[5005]: I0225 11:38:08.683444 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 25 11:38:09 crc kubenswrapper[5005]: I0225 11:38:09.104181 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 25 11:38:13 crc kubenswrapper[5005]: I0225 11:38:13.823466 5005 scope.go:117] "RemoveContainer" containerID="cae1af438d132ed2bc47900ecaee536e0d8e691d2548cb3fa92684d743d8c949" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.273601 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.274715 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.275206 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.275307 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.280786 5005 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.281724 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.522788 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-xdxb8"] Feb 25 11:38:16 crc kubenswrapper[5005]: E0225 11:38:16.523245 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df09937-0397-4bf9-8f3a-435b47224f7e" containerName="oc" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.523257 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df09937-0397-4bf9-8f3a-435b47224f7e" containerName="oc" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.523433 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df09937-0397-4bf9-8f3a-435b47224f7e" containerName="oc" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.525009 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.555538 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-xdxb8"] Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.574329 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgnf\" (UniqueName: \"kubernetes.io/projected/00cece27-2c2b-466b-a9c8-e72aabef0410-kube-api-access-llgnf\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.574408 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-config\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.574444 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-dns-svc\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.574483 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.574508 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.675709 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-config\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.675778 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-dns-svc\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.675911 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.675946 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.676087 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgnf\" (UniqueName: 
\"kubernetes.io/projected/00cece27-2c2b-466b-a9c8-e72aabef0410-kube-api-access-llgnf\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.676702 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-config\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.676925 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.676929 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.677613 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-dns-svc\") pod \"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.697432 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgnf\" (UniqueName: \"kubernetes.io/projected/00cece27-2c2b-466b-a9c8-e72aabef0410-kube-api-access-llgnf\") pod 
\"dnsmasq-dns-5b856c5697-xdxb8\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.801328 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.839896 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.849906 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:16 crc kubenswrapper[5005]: I0225 11:38:16.977297 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 25 11:38:17 crc kubenswrapper[5005]: I0225 11:38:17.180558 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 25 11:38:17 crc kubenswrapper[5005]: I0225 11:38:17.403755 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-xdxb8"] Feb 25 11:38:17 crc kubenswrapper[5005]: W0225 11:38:17.415787 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cece27_2c2b_466b_a9c8_e72aabef0410.slice/crio-fc5bfec534b114c0298b05aa301c45cffc90375bc2f8e1a920021119c1dc8ca0 WatchSource:0}: Error finding container fc5bfec534b114c0298b05aa301c45cffc90375bc2f8e1a920021119c1dc8ca0: Status 404 returned error can't find the container with id fc5bfec534b114c0298b05aa301c45cffc90375bc2f8e1a920021119c1dc8ca0 Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.182655 5005 generic.go:334] "Generic (PLEG): container finished" podID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerID="368e9872489a5e95a0c84300b654061df60ad2cbc71ef0fe767f3065c93bd780" exitCode=0 Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 
11:38:18.184167 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" event={"ID":"00cece27-2c2b-466b-a9c8-e72aabef0410","Type":"ContainerDied","Data":"368e9872489a5e95a0c84300b654061df60ad2cbc71ef0fe767f3065c93bd780"} Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.184198 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" event={"ID":"00cece27-2c2b-466b-a9c8-e72aabef0410","Type":"ContainerStarted","Data":"fc5bfec534b114c0298b05aa301c45cffc90375bc2f8e1a920021119c1dc8ca0"} Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.725834 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.726504 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-central-agent" containerID="cri-o://6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0" gracePeriod=30 Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.726526 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="proxy-httpd" containerID="cri-o://55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde" gracePeriod=30 Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.726636 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-notification-agent" containerID="cri-o://f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359" gracePeriod=30 Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.726656 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="sg-core" containerID="cri-o://ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4" gracePeriod=30 Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.735779 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": EOF" Feb 25 11:38:18 crc kubenswrapper[5005]: I0225 11:38:18.875150 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.193597 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" event={"ID":"00cece27-2c2b-466b-a9c8-e72aabef0410","Type":"ContainerStarted","Data":"e9c775f852281bdc6bb8b5cbf58776f8307b2680e0ece8466c24b636d1632fbe"} Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.193800 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196160 5005 generic.go:334] "Generic (PLEG): container finished" podID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerID="55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde" exitCode=0 Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196188 5005 generic.go:334] "Generic (PLEG): container finished" podID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerID="ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4" exitCode=2 Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196196 5005 generic.go:334] "Generic (PLEG): container finished" podID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerID="6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0" exitCode=0 Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196205 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerDied","Data":"55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde"} Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196264 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerDied","Data":"ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4"} Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196281 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerDied","Data":"6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0"} Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196675 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-api" containerID="cri-o://11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3" gracePeriod=30 Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.196670 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-log" containerID="cri-o://5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1" gracePeriod=30 Feb 25 11:38:19 crc kubenswrapper[5005]: I0225 11:38:19.215440 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" podStartSLOduration=3.215424535 podStartE2EDuration="3.215424535s" podCreationTimestamp="2026-02-25 11:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:38:19.212055593 +0000 UTC m=+1213.252787920" watchObservedRunningTime="2026-02-25 11:38:19.215424535 
+0000 UTC m=+1213.256156862" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.123163 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.213941 5005 generic.go:334] "Generic (PLEG): container finished" podID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerID="5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1" exitCode=143 Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.214016 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a7e414b-bfff-44be-9dd0-9c73445bfc5c","Type":"ContainerDied","Data":"5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1"} Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.215203 5005 generic.go:334] "Generic (PLEG): container finished" podID="b374c6f2-49de-499e-9c35-b0b859de60f2" containerID="349bb79dd4d251f0509c62a4ed8d2a2b8477ffa133040216112b2441f98e7030" exitCode=137 Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.215241 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374c6f2-49de-499e-9c35-b0b859de60f2","Type":"ContainerDied","Data":"349bb79dd4d251f0509c62a4ed8d2a2b8477ffa133040216112b2441f98e7030"} Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.215260 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b374c6f2-49de-499e-9c35-b0b859de60f2","Type":"ContainerDied","Data":"aae042aa48af8371a80bb07091defba5deb86c81b61da406c61bb781c0b7672f"} Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.215271 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae042aa48af8371a80bb07091defba5deb86c81b61da406c61bb781c0b7672f" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.222345 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.222705 5005 generic.go:334] "Generic (PLEG): container finished" podID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerID="f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359" exitCode=0 Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.222899 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.223405 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerDied","Data":"f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359"} Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.223442 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caca9b15-0f1c-40a9-8cf9-0c030b9114f2","Type":"ContainerDied","Data":"c0f9a9cb56c65f24a9107015c0add265de7c42ca2774d062b7d7df403c19242f"} Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.223458 5005 scope.go:117] "RemoveContainer" containerID="55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238637 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-sg-core-conf-yaml\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238675 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-ceilometer-tls-certs\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 
11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238725 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-combined-ca-bundle\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238769 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-scripts\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238824 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-config-data\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238844 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95dh\" (UniqueName: \"kubernetes.io/projected/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-kube-api-access-m95dh\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.238868 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-run-httpd\") pod \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.239001 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-log-httpd\") pod 
\"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\" (UID: \"caca9b15-0f1c-40a9-8cf9-0c030b9114f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.245561 5005 scope.go:117] "RemoveContainer" containerID="ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.245775 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-scripts" (OuterVolumeSpecName: "scripts") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.246722 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.246777 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.250125 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-kube-api-access-m95dh" (OuterVolumeSpecName: "kube-api-access-m95dh") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "kube-api-access-m95dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.265088 5005 scope.go:117] "RemoveContainer" containerID="f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.282083 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.298972 5005 scope.go:117] "RemoveContainer" containerID="6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.298998 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.327665 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.330432 5005 scope.go:117] "RemoveContainer" containerID="55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.331145 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde\": container with ID starting with 55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde not found: ID does not exist" containerID="55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.331177 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde"} err="failed to get container status \"55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde\": rpc error: code = NotFound desc = could not find container \"55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde\": container with ID starting with 55d58ed4bcea46537ed3c6d46c2ff46da4bfda8889c41da3a63c6e06558bbbde not found: ID does not exist" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.331196 5005 scope.go:117] "RemoveContainer" containerID="ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.331589 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4\": container with ID starting with ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4 not found: ID does not exist" containerID="ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.331636 
5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4"} err="failed to get container status \"ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4\": rpc error: code = NotFound desc = could not find container \"ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4\": container with ID starting with ba3f8ef5669f0828e0e466265cf7e54cf425f617dace7b414aae6c4f69af1af4 not found: ID does not exist" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.331664 5005 scope.go:117] "RemoveContainer" containerID="f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.331974 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359\": container with ID starting with f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359 not found: ID does not exist" containerID="f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.332013 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359"} err="failed to get container status \"f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359\": rpc error: code = NotFound desc = could not find container \"f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359\": container with ID starting with f38fe29d06cb7b7d524c6b0ae29b25f588bbc2d8cc9042f2e480d4d210946359 not found: ID does not exist" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.332025 5005 scope.go:117] "RemoveContainer" containerID="6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 
11:38:20.332366 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0\": container with ID starting with 6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0 not found: ID does not exist" containerID="6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.332462 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0"} err="failed to get container status \"6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0\": rpc error: code = NotFound desc = could not find container \"6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0\": container with ID starting with 6ab1e93f09b439376646c55cda6bc831e40e23b36d92c8a36ae4dfe8124d71a0 not found: ID does not exist" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.346468 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgqzh\" (UniqueName: \"kubernetes.io/projected/b374c6f2-49de-499e-9c35-b0b859de60f2-kube-api-access-pgqzh\") pod \"b374c6f2-49de-499e-9c35-b0b859de60f2\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.346572 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-config-data\") pod \"b374c6f2-49de-499e-9c35-b0b859de60f2\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.346691 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-combined-ca-bundle\") pod 
\"b374c6f2-49de-499e-9c35-b0b859de60f2\" (UID: \"b374c6f2-49de-499e-9c35-b0b859de60f2\") " Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347177 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347208 5005 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347220 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347232 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347243 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95dh\" (UniqueName: \"kubernetes.io/projected/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-kube-api-access-m95dh\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347255 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.347265 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc 
kubenswrapper[5005]: I0225 11:38:20.353068 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b374c6f2-49de-499e-9c35-b0b859de60f2-kube-api-access-pgqzh" (OuterVolumeSpecName: "kube-api-access-pgqzh") pod "b374c6f2-49de-499e-9c35-b0b859de60f2" (UID: "b374c6f2-49de-499e-9c35-b0b859de60f2"). InnerVolumeSpecName "kube-api-access-pgqzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.366596 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-config-data" (OuterVolumeSpecName: "config-data") pod "caca9b15-0f1c-40a9-8cf9-0c030b9114f2" (UID: "caca9b15-0f1c-40a9-8cf9-0c030b9114f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.368564 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b374c6f2-49de-499e-9c35-b0b859de60f2" (UID: "b374c6f2-49de-499e-9c35-b0b859de60f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.376398 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-config-data" (OuterVolumeSpecName: "config-data") pod "b374c6f2-49de-499e-9c35-b0b859de60f2" (UID: "b374c6f2-49de-499e-9c35-b0b859de60f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.448998 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.449031 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgqzh\" (UniqueName: \"kubernetes.io/projected/b374c6f2-49de-499e-9c35-b0b859de60f2-kube-api-access-pgqzh\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.449042 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caca9b15-0f1c-40a9-8cf9-0c030b9114f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.449050 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b374c6f2-49de-499e-9c35-b0b859de60f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.558171 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.573397 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.580260 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.581004 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="sg-core" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581023 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="sg-core" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 
11:38:20.581040 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b374c6f2-49de-499e-9c35-b0b859de60f2" containerName="nova-cell1-novncproxy-novncproxy" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581046 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b374c6f2-49de-499e-9c35-b0b859de60f2" containerName="nova-cell1-novncproxy-novncproxy" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.581059 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-central-agent" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581066 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-central-agent" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.581075 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="proxy-httpd" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581081 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="proxy-httpd" Feb 25 11:38:20 crc kubenswrapper[5005]: E0225 11:38:20.581088 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-notification-agent" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581094 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-notification-agent" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581248 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="sg-core" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581260 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" 
containerName="proxy-httpd" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581271 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b374c6f2-49de-499e-9c35-b0b859de60f2" containerName="nova-cell1-novncproxy-novncproxy" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581279 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-central-agent" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.581287 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" containerName="ceilometer-notification-agent" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.582937 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.586574 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.586713 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.586964 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.603802 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.652514 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.652575 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.652636 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.652699 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-config-data\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.652731 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-scripts\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.652782 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.653090 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.653194 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcq7\" (UniqueName: \"kubernetes.io/projected/29c39ed3-bc30-4749-b26c-92e320736c0f-kube-api-access-kxcq7\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.698655 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caca9b15-0f1c-40a9-8cf9-0c030b9114f2" path="/var/lib/kubelet/pods/caca9b15-0f1c-40a9-8cf9-0c030b9114f2/volumes" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.754287 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.754869 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.754907 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.754968 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-config-data\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.755010 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-scripts\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.755051 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.755198 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.755224 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcq7\" (UniqueName: \"kubernetes.io/projected/29c39ed3-bc30-4749-b26c-92e320736c0f-kube-api-access-kxcq7\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.755608 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-run-httpd\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " 
pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.756318 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-log-httpd\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.758720 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.759773 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-config-data\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.761940 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.761968 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-scripts\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.762024 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.777146 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcq7\" (UniqueName: \"kubernetes.io/projected/29c39ed3-bc30-4749-b26c-92e320736c0f-kube-api-access-kxcq7\") pod \"ceilometer-0\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " pod="openstack/ceilometer-0" Feb 25 11:38:20 crc kubenswrapper[5005]: I0225 11:38:20.911327 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.236641 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.247491 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.286914 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.299520 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.317720 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.319197 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.321343 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.321534 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.321580 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.328658 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.366151 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.366228 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7bd\" (UniqueName: \"kubernetes.io/projected/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-kube-api-access-7f7bd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.366264 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.366352 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.366387 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.467633 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.467740 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f7bd\" (UniqueName: \"kubernetes.io/projected/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-kube-api-access-7f7bd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.467836 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc 
kubenswrapper[5005]: I0225 11:38:21.467940 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.467963 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.481920 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.482063 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.482094 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.483874 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.484696 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f7bd\" (UniqueName: \"kubernetes.io/projected/f2858fac-fd3d-46ed-9ac2-f057ed2d8395-kube-api-access-7f7bd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2858fac-fd3d-46ed-9ac2-f057ed2d8395\") " pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:21 crc kubenswrapper[5005]: I0225 11:38:21.637772 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.094866 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 25 11:38:22 crc kubenswrapper[5005]: W0225 11:38:22.098616 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2858fac_fd3d_46ed_9ac2_f057ed2d8395.slice/crio-22cb5c16a7cc371870fa2ae1ae1c8b58dfc31d1312623841537bf9519513f182 WatchSource:0}: Error finding container 22cb5c16a7cc371870fa2ae1ae1c8b58dfc31d1312623841537bf9519513f182: Status 404 returned error can't find the container with id 22cb5c16a7cc371870fa2ae1ae1c8b58dfc31d1312623841537bf9519513f182 Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.250497 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerStarted","Data":"f88be5bedbbc34161d653695096fb400839e346b8c89d725084aba39145623e5"} Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.251415 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerStarted","Data":"ff2a7205c9fcee10165d45fbcc4e6fada2bbe36e75ef6488a1f3025ae9371c65"} Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.252440 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2858fac-fd3d-46ed-9ac2-f057ed2d8395","Type":"ContainerStarted","Data":"22cb5c16a7cc371870fa2ae1ae1c8b58dfc31d1312623841537bf9519513f182"} Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.697471 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b374c6f2-49de-499e-9c35-b0b859de60f2" path="/var/lib/kubelet/pods/b374c6f2-49de-499e-9c35-b0b859de60f2/volumes" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.756290 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.819233 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-logs\") pod \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.819380 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-config-data\") pod \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.819419 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-combined-ca-bundle\") pod \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.819457 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rndr\" (UniqueName: \"kubernetes.io/projected/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-kube-api-access-2rndr\") pod \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\" (UID: \"6a7e414b-bfff-44be-9dd0-9c73445bfc5c\") " Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.821437 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-logs" (OuterVolumeSpecName: "logs") pod "6a7e414b-bfff-44be-9dd0-9c73445bfc5c" (UID: "6a7e414b-bfff-44be-9dd0-9c73445bfc5c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.826046 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-kube-api-access-2rndr" (OuterVolumeSpecName: "kube-api-access-2rndr") pod "6a7e414b-bfff-44be-9dd0-9c73445bfc5c" (UID: "6a7e414b-bfff-44be-9dd0-9c73445bfc5c"). InnerVolumeSpecName "kube-api-access-2rndr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.859127 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-config-data" (OuterVolumeSpecName: "config-data") pod "6a7e414b-bfff-44be-9dd0-9c73445bfc5c" (UID: "6a7e414b-bfff-44be-9dd0-9c73445bfc5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.864247 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7e414b-bfff-44be-9dd0-9c73445bfc5c" (UID: "6a7e414b-bfff-44be-9dd0-9c73445bfc5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.921200 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.921232 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.921241 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:22 crc kubenswrapper[5005]: I0225 11:38:22.921253 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rndr\" (UniqueName: \"kubernetes.io/projected/6a7e414b-bfff-44be-9dd0-9c73445bfc5c-kube-api-access-2rndr\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.270320 5005 generic.go:334] "Generic (PLEG): container finished" podID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerID="11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3" exitCode=0 Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.270400 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.270417 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a7e414b-bfff-44be-9dd0-9c73445bfc5c","Type":"ContainerDied","Data":"11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3"} Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.271877 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6a7e414b-bfff-44be-9dd0-9c73445bfc5c","Type":"ContainerDied","Data":"633f601c34cf3bf3666bfb2d1762c5b1ff3c9f4d6c0a28057bb4f19bf681c573"} Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.271988 5005 scope.go:117] "RemoveContainer" containerID="11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.277736 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2858fac-fd3d-46ed-9ac2-f057ed2d8395","Type":"ContainerStarted","Data":"6e0214fcd233f8a1c83314e5c29759d91cdfc1d01c7f3af9ebe360cc0dfeebc4"} Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.287255 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerStarted","Data":"307be29c718a83803567db76497968305a12ac38154316e21bffaeb1781b7ed5"} Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.287296 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerStarted","Data":"fbd62676a0db118d9d36752afba8e28086d289817b8972fbe610e2f28bd4f8ee"} Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.293835 5005 scope.go:117] "RemoveContainer" containerID="5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.329280 5005 scope.go:117] 
"RemoveContainer" containerID="11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3" Feb 25 11:38:23 crc kubenswrapper[5005]: E0225 11:38:23.329684 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3\": container with ID starting with 11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3 not found: ID does not exist" containerID="11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.329725 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3"} err="failed to get container status \"11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3\": rpc error: code = NotFound desc = could not find container \"11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3\": container with ID starting with 11ebe164443d683630c79821e740f6d89bd841b9c1b24194b8cd2a115a2a31f3 not found: ID does not exist" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.329747 5005 scope.go:117] "RemoveContainer" containerID="5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1" Feb 25 11:38:23 crc kubenswrapper[5005]: E0225 11:38:23.330777 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1\": container with ID starting with 5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1 not found: ID does not exist" containerID="5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.330800 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1"} err="failed to get container status \"5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1\": rpc error: code = NotFound desc = could not find container \"5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1\": container with ID starting with 5fdd9adad9b8ecd20f36056e15f6ccdeed0ecf689a11cbb42d2ce1a1d77d29d1 not found: ID does not exist" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.341099 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.341056477 podStartE2EDuration="2.341056477s" podCreationTimestamp="2026-02-25 11:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:38:23.310971496 +0000 UTC m=+1217.351703833" watchObservedRunningTime="2026-02-25 11:38:23.341056477 +0000 UTC m=+1217.381788804" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.352592 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.375529 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.384844 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:23 crc kubenswrapper[5005]: E0225 11:38:23.385238 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-api" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.385258 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-api" Feb 25 11:38:23 crc kubenswrapper[5005]: E0225 11:38:23.385279 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-log" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.385287 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-log" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.385545 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-api" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.385581 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" containerName="nova-api-log" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.386754 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.389412 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.389600 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.389729 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.393016 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.531240 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.531606 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8753d7-153b-406c-875e-b54d53dc851e-logs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.531628 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.531659 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tzs\" (UniqueName: \"kubernetes.io/projected/bb8753d7-153b-406c-875e-b54d53dc851e-kube-api-access-h5tzs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.531681 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-config-data\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.531718 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633079 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633133 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8753d7-153b-406c-875e-b54d53dc851e-logs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633159 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tzs\" (UniqueName: \"kubernetes.io/projected/bb8753d7-153b-406c-875e-b54d53dc851e-kube-api-access-h5tzs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633215 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-config-data\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633252 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.633791 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb8753d7-153b-406c-875e-b54d53dc851e-logs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.637982 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.638178 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.645133 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-config-data\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.649910 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.650223 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tzs\" (UniqueName: \"kubernetes.io/projected/bb8753d7-153b-406c-875e-b54d53dc851e-kube-api-access-h5tzs\") pod \"nova-api-0\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " pod="openstack/nova-api-0" Feb 25 11:38:23 crc kubenswrapper[5005]: I0225 11:38:23.713413 5005 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:24 crc kubenswrapper[5005]: I0225 11:38:24.181664 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:24 crc kubenswrapper[5005]: W0225 11:38:24.189832 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb8753d7_153b_406c_875e_b54d53dc851e.slice/crio-aa35b281699ed9660019c1ebcecfb1c675f4da279eb828cf9ec184bed7e108b9 WatchSource:0}: Error finding container aa35b281699ed9660019c1ebcecfb1c675f4da279eb828cf9ec184bed7e108b9: Status 404 returned error can't find the container with id aa35b281699ed9660019c1ebcecfb1c675f4da279eb828cf9ec184bed7e108b9 Feb 25 11:38:24 crc kubenswrapper[5005]: I0225 11:38:24.299275 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb8753d7-153b-406c-875e-b54d53dc851e","Type":"ContainerStarted","Data":"aa35b281699ed9660019c1ebcecfb1c675f4da279eb828cf9ec184bed7e108b9"} Feb 25 11:38:24 crc kubenswrapper[5005]: I0225 11:38:24.711515 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7e414b-bfff-44be-9dd0-9c73445bfc5c" path="/var/lib/kubelet/pods/6a7e414b-bfff-44be-9dd0-9c73445bfc5c/volumes" Feb 25 11:38:25 crc kubenswrapper[5005]: I0225 11:38:25.313546 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerStarted","Data":"0aa9e7276d63c74f973f1a9ab82927df2838e0efe2e4543358fee0062c6f9a89"} Feb 25 11:38:25 crc kubenswrapper[5005]: I0225 11:38:25.313684 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 11:38:25 crc kubenswrapper[5005]: I0225 11:38:25.317221 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"bb8753d7-153b-406c-875e-b54d53dc851e","Type":"ContainerStarted","Data":"33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0"} Feb 25 11:38:25 crc kubenswrapper[5005]: I0225 11:38:25.317980 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb8753d7-153b-406c-875e-b54d53dc851e","Type":"ContainerStarted","Data":"1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0"} Feb 25 11:38:25 crc kubenswrapper[5005]: I0225 11:38:25.354898 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.753131394 podStartE2EDuration="5.354873449s" podCreationTimestamp="2026-02-25 11:38:20 +0000 UTC" firstStartedPulling="2026-02-25 11:38:21.278558251 +0000 UTC m=+1215.319290578" lastFinishedPulling="2026-02-25 11:38:24.880300306 +0000 UTC m=+1218.921032633" observedRunningTime="2026-02-25 11:38:25.341564206 +0000 UTC m=+1219.382296553" watchObservedRunningTime="2026-02-25 11:38:25.354873449 +0000 UTC m=+1219.395605786" Feb 25 11:38:25 crc kubenswrapper[5005]: I0225 11:38:25.379226 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.379195595 podStartE2EDuration="2.379195595s" podCreationTimestamp="2026-02-25 11:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:38:25.366089699 +0000 UTC m=+1219.406822036" watchObservedRunningTime="2026-02-25 11:38:25.379195595 +0000 UTC m=+1219.419927962" Feb 25 11:38:26 crc kubenswrapper[5005]: I0225 11:38:26.638057 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:26 crc kubenswrapper[5005]: I0225 11:38:26.851697 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:38:26 crc 
kubenswrapper[5005]: I0225 11:38:26.954889 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-bqh8f"] Feb 25 11:38:26 crc kubenswrapper[5005]: I0225 11:38:26.955449 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerName="dnsmasq-dns" containerID="cri-o://3d4aaa7d5db69734c5c2219d284b8fdfcea18d492ec838d88857b5e33b6b6e24" gracePeriod=10 Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.351840 5005 generic.go:334] "Generic (PLEG): container finished" podID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerID="3d4aaa7d5db69734c5c2219d284b8fdfcea18d492ec838d88857b5e33b6b6e24" exitCode=0 Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.351885 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" event={"ID":"29cad9cb-c8e5-4f68-89b4-cb0f89f33637","Type":"ContainerDied","Data":"3d4aaa7d5db69734c5c2219d284b8fdfcea18d492ec838d88857b5e33b6b6e24"} Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.479986 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.522960 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-nb\") pod \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.523057 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-dns-svc\") pod \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.523124 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-config\") pod \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.523288 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-sb\") pod \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.523402 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lnbt\" (UniqueName: \"kubernetes.io/projected/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-kube-api-access-9lnbt\") pod \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\" (UID: \"29cad9cb-c8e5-4f68-89b4-cb0f89f33637\") " Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.546654 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-kube-api-access-9lnbt" (OuterVolumeSpecName: "kube-api-access-9lnbt") pod "29cad9cb-c8e5-4f68-89b4-cb0f89f33637" (UID: "29cad9cb-c8e5-4f68-89b4-cb0f89f33637"). InnerVolumeSpecName "kube-api-access-9lnbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.585550 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29cad9cb-c8e5-4f68-89b4-cb0f89f33637" (UID: "29cad9cb-c8e5-4f68-89b4-cb0f89f33637"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.588549 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29cad9cb-c8e5-4f68-89b4-cb0f89f33637" (UID: "29cad9cb-c8e5-4f68-89b4-cb0f89f33637"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.590465 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-config" (OuterVolumeSpecName: "config") pod "29cad9cb-c8e5-4f68-89b4-cb0f89f33637" (UID: "29cad9cb-c8e5-4f68-89b4-cb0f89f33637"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.600034 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29cad9cb-c8e5-4f68-89b4-cb0f89f33637" (UID: "29cad9cb-c8e5-4f68-89b4-cb0f89f33637"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.626166 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.626223 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.626232 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.626241 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:27 crc kubenswrapper[5005]: I0225 11:38:27.626251 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lnbt\" (UniqueName: \"kubernetes.io/projected/29cad9cb-c8e5-4f68-89b4-cb0f89f33637-kube-api-access-9lnbt\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.369991 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" event={"ID":"29cad9cb-c8e5-4f68-89b4-cb0f89f33637","Type":"ContainerDied","Data":"e0cc3f83f4331ce5af69427b0fc330aec43a7c0bd4e1474baea24a14ff35b733"} Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.370424 5005 scope.go:117] "RemoveContainer" containerID="3d4aaa7d5db69734c5c2219d284b8fdfcea18d492ec838d88857b5e33b6b6e24" Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.370664 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-bqh8f" Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.420226 5005 scope.go:117] "RemoveContainer" containerID="b3e75160dd6b139853af5abbd366c4fe4dc7e89e9d54a0f46c3d17d9763facf1" Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.428032 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-bqh8f"] Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.440022 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-bqh8f"] Feb 25 11:38:28 crc kubenswrapper[5005]: I0225 11:38:28.711511 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" path="/var/lib/kubelet/pods/29cad9cb-c8e5-4f68-89b4-cb0f89f33637/volumes" Feb 25 11:38:31 crc kubenswrapper[5005]: I0225 11:38:31.639017 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:31 crc kubenswrapper[5005]: I0225 11:38:31.674466 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.433554 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.627018 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zqrvp"] Feb 25 11:38:32 crc kubenswrapper[5005]: E0225 11:38:32.627714 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerName="init" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.627733 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerName="init" Feb 25 11:38:32 crc kubenswrapper[5005]: E0225 11:38:32.627749 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerName="dnsmasq-dns" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.627756 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerName="dnsmasq-dns" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.627919 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cad9cb-c8e5-4f68-89b4-cb0f89f33637" containerName="dnsmasq-dns" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.628435 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.633349 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.633632 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.641632 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zqrvp"] Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.741264 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-scripts\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.741340 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dmn\" (UniqueName: \"kubernetes.io/projected/277c7024-9f41-4e45-afd0-68cf9af20681-kube-api-access-z4dmn\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " 
pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.741394 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-config-data\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.741415 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.843190 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-scripts\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.843582 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dmn\" (UniqueName: \"kubernetes.io/projected/277c7024-9f41-4e45-afd0-68cf9af20681-kube-api-access-z4dmn\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.843747 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-config-data\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " 
pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.843841 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.848720 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-config-data\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.848869 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-scripts\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.850948 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc kubenswrapper[5005]: I0225 11:38:32.861004 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dmn\" (UniqueName: \"kubernetes.io/projected/277c7024-9f41-4e45-afd0-68cf9af20681-kube-api-access-z4dmn\") pod \"nova-cell1-cell-mapping-zqrvp\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:32 crc 
kubenswrapper[5005]: I0225 11:38:32.949455 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:33 crc kubenswrapper[5005]: I0225 11:38:33.396213 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zqrvp"] Feb 25 11:38:33 crc kubenswrapper[5005]: W0225 11:38:33.406491 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277c7024_9f41_4e45_afd0_68cf9af20681.slice/crio-682f0a57b6bdfc7262579aee61e3f2f6f6b6527c5d19f43fa18a2724b28818f6 WatchSource:0}: Error finding container 682f0a57b6bdfc7262579aee61e3f2f6f6b6527c5d19f43fa18a2724b28818f6: Status 404 returned error can't find the container with id 682f0a57b6bdfc7262579aee61e3f2f6f6b6527c5d19f43fa18a2724b28818f6 Feb 25 11:38:33 crc kubenswrapper[5005]: I0225 11:38:33.425521 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zqrvp" event={"ID":"277c7024-9f41-4e45-afd0-68cf9af20681","Type":"ContainerStarted","Data":"682f0a57b6bdfc7262579aee61e3f2f6f6b6527c5d19f43fa18a2724b28818f6"} Feb 25 11:38:33 crc kubenswrapper[5005]: I0225 11:38:33.713980 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 25 11:38:33 crc kubenswrapper[5005]: I0225 11:38:33.714353 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 25 11:38:34 crc kubenswrapper[5005]: I0225 11:38:34.435430 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zqrvp" event={"ID":"277c7024-9f41-4e45-afd0-68cf9af20681","Type":"ContainerStarted","Data":"ddea575fffbe9605941f6d3d236f15b2b2a08d14f8d8cf9aa9086ee9fb7331e2"} Feb 25 11:38:34 crc kubenswrapper[5005]: I0225 11:38:34.732554 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:34 crc kubenswrapper[5005]: I0225 11:38:34.732585 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:38 crc kubenswrapper[5005]: I0225 11:38:38.482039 5005 generic.go:334] "Generic (PLEG): container finished" podID="277c7024-9f41-4e45-afd0-68cf9af20681" containerID="ddea575fffbe9605941f6d3d236f15b2b2a08d14f8d8cf9aa9086ee9fb7331e2" exitCode=0 Feb 25 11:38:38 crc kubenswrapper[5005]: I0225 11:38:38.482089 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zqrvp" event={"ID":"277c7024-9f41-4e45-afd0-68cf9af20681","Type":"ContainerDied","Data":"ddea575fffbe9605941f6d3d236f15b2b2a08d14f8d8cf9aa9086ee9fb7331e2"} Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.019885 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.086492 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4dmn\" (UniqueName: \"kubernetes.io/projected/277c7024-9f41-4e45-afd0-68cf9af20681-kube-api-access-z4dmn\") pod \"277c7024-9f41-4e45-afd0-68cf9af20681\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.086621 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-combined-ca-bundle\") pod \"277c7024-9f41-4e45-afd0-68cf9af20681\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.086664 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-scripts\") pod \"277c7024-9f41-4e45-afd0-68cf9af20681\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.086770 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-config-data\") pod \"277c7024-9f41-4e45-afd0-68cf9af20681\" (UID: \"277c7024-9f41-4e45-afd0-68cf9af20681\") " Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.093580 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-scripts" (OuterVolumeSpecName: "scripts") pod "277c7024-9f41-4e45-afd0-68cf9af20681" (UID: "277c7024-9f41-4e45-afd0-68cf9af20681"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.093806 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277c7024-9f41-4e45-afd0-68cf9af20681-kube-api-access-z4dmn" (OuterVolumeSpecName: "kube-api-access-z4dmn") pod "277c7024-9f41-4e45-afd0-68cf9af20681" (UID: "277c7024-9f41-4e45-afd0-68cf9af20681"). InnerVolumeSpecName "kube-api-access-z4dmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.111605 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "277c7024-9f41-4e45-afd0-68cf9af20681" (UID: "277c7024-9f41-4e45-afd0-68cf9af20681"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.131195 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-config-data" (OuterVolumeSpecName: "config-data") pod "277c7024-9f41-4e45-afd0-68cf9af20681" (UID: "277c7024-9f41-4e45-afd0-68cf9af20681"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.189898 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.189976 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.190007 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277c7024-9f41-4e45-afd0-68cf9af20681-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.190034 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4dmn\" (UniqueName: \"kubernetes.io/projected/277c7024-9f41-4e45-afd0-68cf9af20681-kube-api-access-z4dmn\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.511626 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zqrvp" event={"ID":"277c7024-9f41-4e45-afd0-68cf9af20681","Type":"ContainerDied","Data":"682f0a57b6bdfc7262579aee61e3f2f6f6b6527c5d19f43fa18a2724b28818f6"} Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.511708 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="682f0a57b6bdfc7262579aee61e3f2f6f6b6527c5d19f43fa18a2724b28818f6" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.512098 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zqrvp" Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.815835 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.816245 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d06d7bee-8488-40a7-aa2b-56c3e45a92f0" containerName="nova-scheduler-scheduler" containerID="cri-o://09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d" gracePeriod=30 Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.831434 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.831795 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-log" containerID="cri-o://1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0" gracePeriod=30 Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.831868 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-api" containerID="cri-o://33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0" gracePeriod=30 Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.849237 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.850085 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-log" containerID="cri-o://760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46" gracePeriod=30 Feb 25 11:38:40 crc kubenswrapper[5005]: I0225 11:38:40.850587 5005 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-metadata" containerID="cri-o://70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42" gracePeriod=30 Feb 25 11:38:41 crc kubenswrapper[5005]: I0225 11:38:41.526267 5005 generic.go:334] "Generic (PLEG): container finished" podID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerID="760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46" exitCode=143 Feb 25 11:38:41 crc kubenswrapper[5005]: I0225 11:38:41.526419 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03bc0cdc-e98f-4161-aa11-534e22b5be40","Type":"ContainerDied","Data":"760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46"} Feb 25 11:38:41 crc kubenswrapper[5005]: I0225 11:38:41.529240 5005 generic.go:334] "Generic (PLEG): container finished" podID="bb8753d7-153b-406c-875e-b54d53dc851e" containerID="1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0" exitCode=143 Feb 25 11:38:41 crc kubenswrapper[5005]: I0225 11:38:41.529277 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb8753d7-153b-406c-875e-b54d53dc851e","Type":"ContainerDied","Data":"1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0"} Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.483621 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.532932 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-config-data\") pod \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.533016 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2f4g\" (UniqueName: \"kubernetes.io/projected/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-kube-api-access-q2f4g\") pod \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.533130 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-combined-ca-bundle\") pod \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\" (UID: \"d06d7bee-8488-40a7-aa2b-56c3e45a92f0\") " Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.539511 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-kube-api-access-q2f4g" (OuterVolumeSpecName: "kube-api-access-q2f4g") pod "d06d7bee-8488-40a7-aa2b-56c3e45a92f0" (UID: "d06d7bee-8488-40a7-aa2b-56c3e45a92f0"). InnerVolumeSpecName "kube-api-access-q2f4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.540091 5005 generic.go:334] "Generic (PLEG): container finished" podID="d06d7bee-8488-40a7-aa2b-56c3e45a92f0" containerID="09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d" exitCode=0 Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.540126 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d06d7bee-8488-40a7-aa2b-56c3e45a92f0","Type":"ContainerDied","Data":"09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d"} Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.540148 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d06d7bee-8488-40a7-aa2b-56c3e45a92f0","Type":"ContainerDied","Data":"f03c79de2601947d90f647c3afc986f53c1e169158ead95c44c391f9eddf2953"} Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.540164 5005 scope.go:117] "RemoveContainer" containerID="09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.540269 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.557869 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-config-data" (OuterVolumeSpecName: "config-data") pod "d06d7bee-8488-40a7-aa2b-56c3e45a92f0" (UID: "d06d7bee-8488-40a7-aa2b-56c3e45a92f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.561071 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d06d7bee-8488-40a7-aa2b-56c3e45a92f0" (UID: "d06d7bee-8488-40a7-aa2b-56c3e45a92f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.624162 5005 scope.go:117] "RemoveContainer" containerID="09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d" Feb 25 11:38:42 crc kubenswrapper[5005]: E0225 11:38:42.624576 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d\": container with ID starting with 09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d not found: ID does not exist" containerID="09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.624609 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d"} err="failed to get container status \"09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d\": rpc error: code = NotFound desc = could not find container \"09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d\": container with ID starting with 09513e3732fdd0822587ab8822c51c20594b6a5660e7385dccb01f3e0f686d2d not found: ID does not exist" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.635218 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.635249 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2f4g\" (UniqueName: \"kubernetes.io/projected/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-kube-api-access-q2f4g\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.635264 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d7bee-8488-40a7-aa2b-56c3e45a92f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.868671 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.880344 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.890862 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:38:42 crc kubenswrapper[5005]: E0225 11:38:42.891265 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277c7024-9f41-4e45-afd0-68cf9af20681" containerName="nova-manage" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.891285 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="277c7024-9f41-4e45-afd0-68cf9af20681" containerName="nova-manage" Feb 25 11:38:42 crc kubenswrapper[5005]: E0225 11:38:42.891305 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06d7bee-8488-40a7-aa2b-56c3e45a92f0" containerName="nova-scheduler-scheduler" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.891313 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06d7bee-8488-40a7-aa2b-56c3e45a92f0" containerName="nova-scheduler-scheduler" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.891561 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06d7bee-8488-40a7-aa2b-56c3e45a92f0" 
containerName="nova-scheduler-scheduler" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.891587 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="277c7024-9f41-4e45-afd0-68cf9af20681" containerName="nova-manage" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.892216 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.895041 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.926041 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.944466 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-config-data\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.944521 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:42 crc kubenswrapper[5005]: I0225 11:38:42.944794 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2hm\" (UniqueName: \"kubernetes.io/projected/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-kube-api-access-gt2hm\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.047600 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gt2hm\" (UniqueName: \"kubernetes.io/projected/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-kube-api-access-gt2hm\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.047856 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-config-data\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.047957 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.054996 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-config-data\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.057303 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.068137 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2hm\" (UniqueName: \"kubernetes.io/projected/a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032-kube-api-access-gt2hm\") pod 
\"nova-scheduler-0\" (UID: \"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032\") " pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.225601 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.730230 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 25 11:38:43 crc kubenswrapper[5005]: W0225 11:38:43.732120 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4ac86e9_6f7f_4d5a_8307_6f1dd8f43032.slice/crio-ab60ffe5d93e4cbc4d16a3e8af5a027eb5a6afc474cb93ac52a0f8a5cb0d6546 WatchSource:0}: Error finding container ab60ffe5d93e4cbc4d16a3e8af5a027eb5a6afc474cb93ac52a0f8a5cb0d6546: Status 404 returned error can't find the container with id ab60ffe5d93e4cbc4d16a3e8af5a027eb5a6afc474cb93ac52a0f8a5cb0d6546 Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.982928 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": read tcp 10.217.0.2:54220->10.217.0.189:8775: read: connection reset by peer" Feb 25 11:38:43 crc kubenswrapper[5005]: I0225 11:38:43.982942 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": read tcp 10.217.0.2:54218->10.217.0.189:8775: read: connection reset by peer" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.523978 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.529388 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.562699 5005 generic.go:334] "Generic (PLEG): container finished" podID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerID="70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42" exitCode=0 Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.562798 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03bc0cdc-e98f-4161-aa11-534e22b5be40","Type":"ContainerDied","Data":"70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42"} Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.562849 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03bc0cdc-e98f-4161-aa11-534e22b5be40","Type":"ContainerDied","Data":"a4d6b1f76b749fda0127ad0e84bdea7db36d23b464e7d0e2be8769ed9f930c89"} Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.562867 5005 scope.go:117] "RemoveContainer" containerID="70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.563025 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.574214 5005 generic.go:334] "Generic (PLEG): container finished" podID="bb8753d7-153b-406c-875e-b54d53dc851e" containerID="33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0" exitCode=0 Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.574287 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb8753d7-153b-406c-875e-b54d53dc851e","Type":"ContainerDied","Data":"33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0"} Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.574317 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb8753d7-153b-406c-875e-b54d53dc851e","Type":"ContainerDied","Data":"aa35b281699ed9660019c1ebcecfb1c675f4da279eb828cf9ec184bed7e108b9"} Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.574402 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.580968 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032","Type":"ContainerStarted","Data":"67d1fbea427a8a98af096263128112b28d9a0c785ee03ebbb917fe654aef1b7d"} Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.581012 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032","Type":"ContainerStarted","Data":"ab60ffe5d93e4cbc4d16a3e8af5a027eb5a6afc474cb93ac52a0f8a5cb0d6546"} Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602656 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-combined-ca-bundle\") pod \"03bc0cdc-e98f-4161-aa11-534e22b5be40\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602718 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-public-tls-certs\") pod \"bb8753d7-153b-406c-875e-b54d53dc851e\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602759 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bc0cdc-e98f-4161-aa11-534e22b5be40-logs\") pod \"03bc0cdc-e98f-4161-aa11-534e22b5be40\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602805 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrhfr\" (UniqueName: \"kubernetes.io/projected/03bc0cdc-e98f-4161-aa11-534e22b5be40-kube-api-access-zrhfr\") pod 
\"03bc0cdc-e98f-4161-aa11-534e22b5be40\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602821 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8753d7-153b-406c-875e-b54d53dc851e-logs\") pod \"bb8753d7-153b-406c-875e-b54d53dc851e\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602854 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-internal-tls-certs\") pod \"bb8753d7-153b-406c-875e-b54d53dc851e\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602873 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-config-data\") pod \"bb8753d7-153b-406c-875e-b54d53dc851e\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602955 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tzs\" (UniqueName: \"kubernetes.io/projected/bb8753d7-153b-406c-875e-b54d53dc851e-kube-api-access-h5tzs\") pod \"bb8753d7-153b-406c-875e-b54d53dc851e\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.602995 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-config-data\") pod \"03bc0cdc-e98f-4161-aa11-534e22b5be40\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.603011 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-nova-metadata-tls-certs\") pod \"03bc0cdc-e98f-4161-aa11-534e22b5be40\" (UID: \"03bc0cdc-e98f-4161-aa11-534e22b5be40\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.603050 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-combined-ca-bundle\") pod \"bb8753d7-153b-406c-875e-b54d53dc851e\" (UID: \"bb8753d7-153b-406c-875e-b54d53dc851e\") " Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.605815 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8753d7-153b-406c-875e-b54d53dc851e-logs" (OuterVolumeSpecName: "logs") pod "bb8753d7-153b-406c-875e-b54d53dc851e" (UID: "bb8753d7-153b-406c-875e-b54d53dc851e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.606301 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03bc0cdc-e98f-4161-aa11-534e22b5be40-logs" (OuterVolumeSpecName: "logs") pod "03bc0cdc-e98f-4161-aa11-534e22b5be40" (UID: "03bc0cdc-e98f-4161-aa11-534e22b5be40"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.608053 5005 scope.go:117] "RemoveContainer" containerID="760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.612882 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6128648009999997 podStartE2EDuration="2.612864801s" podCreationTimestamp="2026-02-25 11:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:38:44.6039768 +0000 UTC m=+1238.644709117" watchObservedRunningTime="2026-02-25 11:38:44.612864801 +0000 UTC m=+1238.653597128" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.613207 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bc0cdc-e98f-4161-aa11-534e22b5be40-kube-api-access-zrhfr" (OuterVolumeSpecName: "kube-api-access-zrhfr") pod "03bc0cdc-e98f-4161-aa11-534e22b5be40" (UID: "03bc0cdc-e98f-4161-aa11-534e22b5be40"). InnerVolumeSpecName "kube-api-access-zrhfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.635558 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8753d7-153b-406c-875e-b54d53dc851e-kube-api-access-h5tzs" (OuterVolumeSpecName: "kube-api-access-h5tzs") pod "bb8753d7-153b-406c-875e-b54d53dc851e" (UID: "bb8753d7-153b-406c-875e-b54d53dc851e"). InnerVolumeSpecName "kube-api-access-h5tzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.646558 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03bc0cdc-e98f-4161-aa11-534e22b5be40" (UID: "03bc0cdc-e98f-4161-aa11-534e22b5be40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.647037 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-config-data" (OuterVolumeSpecName: "config-data") pod "03bc0cdc-e98f-4161-aa11-534e22b5be40" (UID: "03bc0cdc-e98f-4161-aa11-534e22b5be40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.658526 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-config-data" (OuterVolumeSpecName: "config-data") pod "bb8753d7-153b-406c-875e-b54d53dc851e" (UID: "bb8753d7-153b-406c-875e-b54d53dc851e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.665530 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb8753d7-153b-406c-875e-b54d53dc851e" (UID: "bb8753d7-153b-406c-875e-b54d53dc851e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.682476 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "03bc0cdc-e98f-4161-aa11-534e22b5be40" (UID: "03bc0cdc-e98f-4161-aa11-534e22b5be40"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.684304 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb8753d7-153b-406c-875e-b54d53dc851e" (UID: "bb8753d7-153b-406c-875e-b54d53dc851e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705115 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrhfr\" (UniqueName: \"kubernetes.io/projected/03bc0cdc-e98f-4161-aa11-534e22b5be40-kube-api-access-zrhfr\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705170 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8753d7-153b-406c-875e-b54d53dc851e-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705518 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705560 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tzs\" (UniqueName: 
\"kubernetes.io/projected/bb8753d7-153b-406c-875e-b54d53dc851e-kube-api-access-h5tzs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705574 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705584 5005 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705593 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705604 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc0cdc-e98f-4161-aa11-534e22b5be40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705613 5005 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705621 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bc0cdc-e98f-4161-aa11-534e22b5be40-logs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.705525 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d06d7bee-8488-40a7-aa2b-56c3e45a92f0" path="/var/lib/kubelet/pods/d06d7bee-8488-40a7-aa2b-56c3e45a92f0/volumes" Feb 25 11:38:44 crc 
kubenswrapper[5005]: I0225 11:38:44.708415 5005 scope.go:117] "RemoveContainer" containerID="70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.708745 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42\": container with ID starting with 70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42 not found: ID does not exist" containerID="70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.708787 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42"} err="failed to get container status \"70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42\": rpc error: code = NotFound desc = could not find container \"70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42\": container with ID starting with 70474a14723af8d559c2002f5cb3c115b0a0726e4a4481418bc07bd50e6f4d42 not found: ID does not exist" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.708811 5005 scope.go:117] "RemoveContainer" containerID="760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.708901 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb8753d7-153b-406c-875e-b54d53dc851e" (UID: "bb8753d7-153b-406c-875e-b54d53dc851e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.709014 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46\": container with ID starting with 760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46 not found: ID does not exist" containerID="760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.709040 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46"} err="failed to get container status \"760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46\": rpc error: code = NotFound desc = could not find container \"760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46\": container with ID starting with 760b9c3c979a4203ce06b4c8946b6cf0a1890b6c28ddf45d37d4d49e97880a46 not found: ID does not exist" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.709056 5005 scope.go:117] "RemoveContainer" containerID="33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.732123 5005 scope.go:117] "RemoveContainer" containerID="1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.752975 5005 scope.go:117] "RemoveContainer" containerID="33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.753936 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0\": container with ID starting with 
33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0 not found: ID does not exist" containerID="33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.753974 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0"} err="failed to get container status \"33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0\": rpc error: code = NotFound desc = could not find container \"33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0\": container with ID starting with 33ed10690b073079afb4ba6eeab9e8c39eaa143e8a828c1c55e27d93d76734f0 not found: ID does not exist" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.753998 5005 scope.go:117] "RemoveContainer" containerID="1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.754486 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0\": container with ID starting with 1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0 not found: ID does not exist" containerID="1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.754507 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0"} err="failed to get container status \"1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0\": rpc error: code = NotFound desc = could not find container \"1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0\": container with ID starting with 1dd04f2897f108f22cd91a57076e30562f534cbb64edd7b01d772197b80c41e0 not found: ID does not 
exist" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.807503 5005 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8753d7-153b-406c-875e-b54d53dc851e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.899056 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.908499 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.923237 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.934082 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942063 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.942512 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-log" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942529 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-log" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.942541 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-api" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942547 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-api" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.942573 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" 
containerName="nova-api-log" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942582 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-log" Feb 25 11:38:44 crc kubenswrapper[5005]: E0225 11:38:44.942592 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-metadata" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942598 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-metadata" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942766 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-log" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942779 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-metadata" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942799 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" containerName="nova-metadata-log" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.942810 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" containerName="nova-api-api" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.943750 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.946004 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.946129 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.953037 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.963916 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.965266 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.971470 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.971754 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.971880 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 25 11:38:44 crc kubenswrapper[5005]: I0225 11:38:44.985584 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.010898 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-config-data\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.010975 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmtc8\" (UniqueName: \"kubernetes.io/projected/a7bae511-5969-46d6-9bb5-e45984a014e4-kube-api-access-nmtc8\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.011017 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.011032 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.011076 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7bae511-5969-46d6-9bb5-e45984a014e4-logs\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112166 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-config-data\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112211 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-config-data\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112261 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmtc8\" (UniqueName: \"kubernetes.io/projected/a7bae511-5969-46d6-9bb5-e45984a014e4-kube-api-access-nmtc8\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112291 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112318 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112334 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112358 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4qr\" (UniqueName: \"kubernetes.io/projected/1050651a-de95-4e1f-92ca-9f89194a2ae3-kube-api-access-vp4qr\") pod \"nova-api-0\" (UID: 
\"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112410 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7bae511-5969-46d6-9bb5-e45984a014e4-logs\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112431 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-public-tls-certs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112449 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1050651a-de95-4e1f-92ca-9f89194a2ae3-logs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.112473 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.113517 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7bae511-5969-46d6-9bb5-e45984a014e4-logs\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.115764 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-config-data\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.116399 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.117712 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bae511-5969-46d6-9bb5-e45984a014e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.130688 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmtc8\" (UniqueName: \"kubernetes.io/projected/a7bae511-5969-46d6-9bb5-e45984a014e4-kube-api-access-nmtc8\") pod \"nova-metadata-0\" (UID: \"a7bae511-5969-46d6-9bb5-e45984a014e4\") " pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.214340 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4qr\" (UniqueName: \"kubernetes.io/projected/1050651a-de95-4e1f-92ca-9f89194a2ae3-kube-api-access-vp4qr\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.214770 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.214803 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1050651a-de95-4e1f-92ca-9f89194a2ae3-logs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.215494 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.215114 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1050651a-de95-4e1f-92ca-9f89194a2ae3-logs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.215581 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-config-data\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.215956 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.218731 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.219028 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-public-tls-certs\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.219769 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.221310 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1050651a-de95-4e1f-92ca-9f89194a2ae3-config-data\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.234522 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4qr\" (UniqueName: \"kubernetes.io/projected/1050651a-de95-4e1f-92ca-9f89194a2ae3-kube-api-access-vp4qr\") pod \"nova-api-0\" (UID: \"1050651a-de95-4e1f-92ca-9f89194a2ae3\") " pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.261302 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.287739 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.785171 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 25 11:38:45 crc kubenswrapper[5005]: W0225 11:38:45.789761 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7bae511_5969_46d6_9bb5_e45984a014e4.slice/crio-3a4d5e7d664967cd5ca12d16b4a268d58578535ad00bb70c5234a461d57fd3d3 WatchSource:0}: Error finding container 3a4d5e7d664967cd5ca12d16b4a268d58578535ad00bb70c5234a461d57fd3d3: Status 404 returned error can't find the container with id 3a4d5e7d664967cd5ca12d16b4a268d58578535ad00bb70c5234a461d57fd3d3 Feb 25 11:38:45 crc kubenswrapper[5005]: I0225 11:38:45.857886 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.600788 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1050651a-de95-4e1f-92ca-9f89194a2ae3","Type":"ContainerStarted","Data":"9cb272587ff70b395835477af6c65bba504e1e73250f91640dd773133131d90c"} Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.600830 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1050651a-de95-4e1f-92ca-9f89194a2ae3","Type":"ContainerStarted","Data":"ef019fe04b4efe064cd20137207eac8bba2814955f1072ad341b986aa8f61e59"} Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.600842 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1050651a-de95-4e1f-92ca-9f89194a2ae3","Type":"ContainerStarted","Data":"91860d00431622154186d79f8fa9f7168dcc2a9e669c4bb82c4be55d85bc691f"} Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.602186 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a7bae511-5969-46d6-9bb5-e45984a014e4","Type":"ContainerStarted","Data":"282a8d6425311704cfae06b7ed4503c0b766ba257546adf9fe15017a8377ae55"} Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.602217 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a7bae511-5969-46d6-9bb5-e45984a014e4","Type":"ContainerStarted","Data":"89d27b2705af3932e6dac909522d590180989405452d2f377cbbd65fc1075fc6"} Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.602231 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a7bae511-5969-46d6-9bb5-e45984a014e4","Type":"ContainerStarted","Data":"3a4d5e7d664967cd5ca12d16b4a268d58578535ad00bb70c5234a461d57fd3d3"} Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.628677 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6286590199999997 podStartE2EDuration="2.62865902s" podCreationTimestamp="2026-02-25 11:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:38:46.627603849 +0000 UTC m=+1240.668336216" watchObservedRunningTime="2026-02-25 11:38:46.62865902 +0000 UTC m=+1240.669391347" Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.671769 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.671746763 podStartE2EDuration="2.671746763s" podCreationTimestamp="2026-02-25 11:38:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:38:46.657990174 +0000 UTC m=+1240.698722491" watchObservedRunningTime="2026-02-25 11:38:46.671746763 +0000 UTC m=+1240.712479120" Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.695442 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03bc0cdc-e98f-4161-aa11-534e22b5be40" path="/var/lib/kubelet/pods/03bc0cdc-e98f-4161-aa11-534e22b5be40/volumes" Feb 25 11:38:46 crc kubenswrapper[5005]: I0225 11:38:46.696854 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8753d7-153b-406c-875e-b54d53dc851e" path="/var/lib/kubelet/pods/bb8753d7-153b-406c-875e-b54d53dc851e/volumes" Feb 25 11:38:48 crc kubenswrapper[5005]: I0225 11:38:48.226250 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 25 11:38:50 crc kubenswrapper[5005]: I0225 11:38:50.261810 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 25 11:38:50 crc kubenswrapper[5005]: I0225 11:38:50.262363 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 25 11:38:50 crc kubenswrapper[5005]: I0225 11:38:50.921084 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 25 11:38:53 crc kubenswrapper[5005]: I0225 11:38:53.226643 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 25 11:38:53 crc kubenswrapper[5005]: I0225 11:38:53.255769 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 25 11:38:53 crc kubenswrapper[5005]: I0225 11:38:53.699072 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 25 11:38:55 crc kubenswrapper[5005]: I0225 11:38:55.262130 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 25 11:38:55 crc kubenswrapper[5005]: I0225 11:38:55.262572 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 25 11:38:55 crc kubenswrapper[5005]: I0225 11:38:55.288415 5005 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 25 11:38:55 crc kubenswrapper[5005]: I0225 11:38:55.288487 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 25 11:38:56 crc kubenswrapper[5005]: I0225 11:38:56.272539 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a7bae511-5969-46d6-9bb5-e45984a014e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:56 crc kubenswrapper[5005]: I0225 11:38:56.272558 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a7bae511-5969-46d6-9bb5-e45984a014e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:56 crc kubenswrapper[5005]: I0225 11:38:56.311620 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1050651a-de95-4e1f-92ca-9f89194a2ae3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 25 11:38:56 crc kubenswrapper[5005]: I0225 11:38:56.311628 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1050651a-de95-4e1f-92ca-9f89194a2ae3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.272170 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.275077 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.284320 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.308338 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.308815 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.311412 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.321532 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.819727 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.827872 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 25 11:39:05 crc kubenswrapper[5005]: I0225 11:39:05.830441 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 25 11:39:13 crc kubenswrapper[5005]: I0225 11:39:13.553877 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:39:14 crc kubenswrapper[5005]: I0225 11:39:14.192569 5005 scope.go:117] "RemoveContainer" containerID="ae0ccc75338ef1910f8158646778e538c6b3b7c03ec91b3835796b28344d2d19" Feb 25 11:39:14 crc kubenswrapper[5005]: I0225 11:39:14.213620 5005 scope.go:117] "RemoveContainer" containerID="71e7e9ae85655d2627e1ff02fd93bc9bc29f3ff8067b7890a6205f9ca5a91348" Feb 25 11:39:14 crc kubenswrapper[5005]: I0225 11:39:14.271745 5005 scope.go:117] "RemoveContainer" 
containerID="bddc27004ef1380b7262a3eba3d8a61f685f8e5cc5ab654a6f41db825980ede7" Feb 25 11:39:14 crc kubenswrapper[5005]: I0225 11:39:14.891850 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 25 11:39:17 crc kubenswrapper[5005]: I0225 11:39:17.649452 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="rabbitmq" containerID="cri-o://961da7cb929f2b8a6214035d30be71ffd508090ea56b3de8cfa7aeff6862ff0e" gracePeriod=604796 Feb 25 11:39:18 crc kubenswrapper[5005]: I0225 11:39:18.925594 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="rabbitmq" containerID="cri-o://ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba" gracePeriod=604796 Feb 25 11:39:19 crc kubenswrapper[5005]: I0225 11:39:19.262137 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 25 11:39:19 crc kubenswrapper[5005]: I0225 11:39:19.523836 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Feb 25 11:39:23 crc kubenswrapper[5005]: I0225 11:39:23.986215 5005 generic.go:334] "Generic (PLEG): container finished" podID="773c4b57-17bf-4159-9b75-81072c68692e" containerID="961da7cb929f2b8a6214035d30be71ffd508090ea56b3de8cfa7aeff6862ff0e" exitCode=0 Feb 25 11:39:23 crc kubenswrapper[5005]: I0225 11:39:23.986993 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"773c4b57-17bf-4159-9b75-81072c68692e","Type":"ContainerDied","Data":"961da7cb929f2b8a6214035d30be71ffd508090ea56b3de8cfa7aeff6862ff0e"} Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.279155 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378628 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/773c4b57-17bf-4159-9b75-81072c68692e-pod-info\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378715 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/773c4b57-17bf-4159-9b75-81072c68692e-erlang-cookie-secret\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378786 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-config-data\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378829 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-plugins-conf\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378878 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnnc2\" (UniqueName: 
\"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-kube-api-access-tnnc2\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378925 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-tls\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.378965 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-server-conf\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.379013 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.379116 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-confd\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.379166 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-plugins\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.379264 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-erlang-cookie\") pod \"773c4b57-17bf-4159-9b75-81072c68692e\" (UID: \"773c4b57-17bf-4159-9b75-81072c68692e\") " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.379994 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.380818 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.385517 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.387175 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/773c4b57-17bf-4159-9b75-81072c68692e-pod-info" (OuterVolumeSpecName: "pod-info") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.388237 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-kube-api-access-tnnc2" (OuterVolumeSpecName: "kube-api-access-tnnc2") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "kube-api-access-tnnc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.389411 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773c4b57-17bf-4159-9b75-81072c68692e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.399914 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.400723 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.436029 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-config-data" (OuterVolumeSpecName: "config-data") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.462292 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-server-conf" (OuterVolumeSpecName: "server-conf") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481459 5005 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/773c4b57-17bf-4159-9b75-81072c68692e-pod-info\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481522 5005 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/773c4b57-17bf-4159-9b75-81072c68692e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481540 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481550 5005 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 25 
11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481559 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnnc2\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-kube-api-access-tnnc2\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481567 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481574 5005 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/773c4b57-17bf-4159-9b75-81072c68692e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481596 5005 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481604 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.481613 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.498701 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "773c4b57-17bf-4159-9b75-81072c68692e" (UID: "773c4b57-17bf-4159-9b75-81072c68692e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.526041 5005 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.584115 5005 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:24 crc kubenswrapper[5005]: I0225 11:39:24.584156 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/773c4b57-17bf-4159-9b75-81072c68692e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.000063 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"773c4b57-17bf-4159-9b75-81072c68692e","Type":"ContainerDied","Data":"473bc070e3444253060c438faf3aa2fb5e55dc8d4b3b3563c81f3038d76081bb"} Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.000340 5005 scope.go:117] "RemoveContainer" containerID="961da7cb929f2b8a6214035d30be71ffd508090ea56b3de8cfa7aeff6862ff0e" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.000240 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.050491 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.050710 5005 scope.go:117] "RemoveContainer" containerID="06eb8e47ec1e952c5b7fecf002337ff4943629e25e9129542412322444f70bcd" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.060188 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.097272 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:39:25 crc kubenswrapper[5005]: E0225 11:39:25.097641 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="setup-container" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.097659 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="setup-container" Feb 25 11:39:25 crc kubenswrapper[5005]: E0225 11:39:25.097686 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="rabbitmq" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.097693 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="rabbitmq" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.097855 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="773c4b57-17bf-4159-9b75-81072c68692e" containerName="rabbitmq" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.098808 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.102275 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.102493 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.102602 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.102711 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.102815 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5dcsx" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.102963 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.103066 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.111548 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.296997 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297053 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwsn\" (UniqueName: 
\"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-kube-api-access-2rwsn\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297117 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7951928d-6b95-4766-b04f-3b7f448ad731-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297320 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297411 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297468 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297568 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297739 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297781 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-config-data\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297819 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7951928d-6b95-4766-b04f-3b7f448ad731-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.297919 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402189 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " 
pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402230 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-config-data\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402259 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7951928d-6b95-4766-b04f-3b7f448ad731-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402306 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402351 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402396 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwsn\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-kube-api-access-2rwsn\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402434 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7951928d-6b95-4766-b04f-3b7f448ad731-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402499 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402521 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.402546 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.403043 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.411736 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.411949 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.412273 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-config-data\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.412666 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.412993 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.416951 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.420030 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7951928d-6b95-4766-b04f-3b7f448ad731-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.424243 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.424340 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7951928d-6b95-4766-b04f-3b7f448ad731-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.424874 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7951928d-6b95-4766-b04f-3b7f448ad731-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.427545 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwsn\" (UniqueName: \"kubernetes.io/projected/7951928d-6b95-4766-b04f-3b7f448ad731-kube-api-access-2rwsn\") pod \"rabbitmq-server-0\" (UID: \"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.445244 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: 
\"7951928d-6b95-4766-b04f-3b7f448ad731\") " pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.489116 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.561193 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616138 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-server-conf\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616186 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-confd\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616237 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616328 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-plugins-conf\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616405 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/86534792-e561-447f-bcef-4ff82b02561c-pod-info\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616442 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrdx\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-kube-api-access-hkrdx\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616480 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-config-data\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616521 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86534792-e561-447f-bcef-4ff82b02561c-erlang-cookie-secret\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616559 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-tls\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.616580 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-plugins\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") " Feb 25 11:39:25 crc 
kubenswrapper[5005]: I0225 11:39:25.616646 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-erlang-cookie\") pod \"86534792-e561-447f-bcef-4ff82b02561c\" (UID: \"86534792-e561-447f-bcef-4ff82b02561c\") "
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.617243 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.618257 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.618490 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.621030 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.623079 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/86534792-e561-447f-bcef-4ff82b02561c-pod-info" (OuterVolumeSpecName: "pod-info") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.623642 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86534792-e561-447f-bcef-4ff82b02561c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.624089 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-kube-api-access-hkrdx" (OuterVolumeSpecName: "kube-api-access-hkrdx") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "kube-api-access-hkrdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.626036 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.659437 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-config-data" (OuterVolumeSpecName: "config-data") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.661731 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-server-conf" (OuterVolumeSpecName: "server-conf") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.725497 5005 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-server-conf\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.725558 5005 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726685 5005 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726719 5005 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86534792-e561-447f-bcef-4ff82b02561c-pod-info\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726738 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkrdx\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-kube-api-access-hkrdx\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726755 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86534792-e561-447f-bcef-4ff82b02561c-config-data\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726769 5005 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86534792-e561-447f-bcef-4ff82b02561c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726786 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.726884 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.727271 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.789565 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "86534792-e561-447f-bcef-4ff82b02561c" (UID: "86534792-e561-447f-bcef-4ff82b02561c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.790292 5005 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.829325 5005 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:25 crc kubenswrapper[5005]: I0225 11:39:25.829353 5005 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86534792-e561-447f-bcef-4ff82b02561c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.009989 5005 generic.go:334] "Generic (PLEG): container finished" podID="86534792-e561-447f-bcef-4ff82b02561c" containerID="ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba" exitCode=0
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.010048 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86534792-e561-447f-bcef-4ff82b02561c","Type":"ContainerDied","Data":"ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba"}
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.010358 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86534792-e561-447f-bcef-4ff82b02561c","Type":"ContainerDied","Data":"69288773e998c6320c4ba0d768e4c5ef4117d64813dc89bdb3d7a2f050dcdf1e"}
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.010073 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.010398 5005 scope.go:117] "RemoveContainer" containerID="ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.033282 5005 scope.go:117] "RemoveContainer" containerID="626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.051581 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.062462 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.071705 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.079835 5005 scope.go:117] "RemoveContainer" containerID="ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba"
Feb 25 11:39:26 crc kubenswrapper[5005]: E0225 11:39:26.080447 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba\": container with ID starting with ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba not found: ID does not exist" containerID="ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.080575 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba"} err="failed to get container status \"ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba\": rpc error: code = NotFound desc = could not find container \"ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba\": container with ID starting with ddb459434e538b24e97be4a21eb45c15cef47dd89dd904bc100b7a86cefd81ba not found: ID does not exist"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.080709 5005 scope.go:117] "RemoveContainer" containerID="626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2"
Feb 25 11:39:26 crc kubenswrapper[5005]: E0225 11:39:26.081540 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2\": container with ID starting with 626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2 not found: ID does not exist" containerID="626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.081623 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2"} err="failed to get container status \"626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2\": rpc error: code = NotFound desc = could not find container \"626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2\": container with ID starting with 626ca3e16ee06387e105dcce1325831e7fb6d4531a71af0b7c907cf52bdf6da2 not found: ID does not exist"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.083125 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:39:26 crc kubenswrapper[5005]: E0225 11:39:26.083493 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="rabbitmq"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.083512 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="rabbitmq"
Feb 25 11:39:26 crc kubenswrapper[5005]: E0225 11:39:26.083529 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="setup-container"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.083535 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="setup-container"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.083697 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="86534792-e561-447f-bcef-4ff82b02561c" containerName="rabbitmq"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.084533 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.088059 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.088290 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6pt8q"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.088651 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.088832 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.089023 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.089171 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.089297 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.111111 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.235935 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.235988 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236020 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnbk\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-kube-api-access-jqnbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236087 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236110 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236165 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236265 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236312 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236349 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236452 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.236484 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338045 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338115 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338154 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338207 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338246 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338294 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqnbk\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-kube-api-access-jqnbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338343 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338435 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338518 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338578 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.338639 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.340024 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.340320 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.340391 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.340402 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.341455 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.343679 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.345258 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.345524 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.346538 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.347135 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.356145 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqnbk\" (UniqueName: \"kubernetes.io/projected/e53714d3-f02e-4700-a89d-d6a8dbcff7d3-kube-api-access-jqnbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.378808 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e53714d3-f02e-4700-a89d-d6a8dbcff7d3\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.603529 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.705010 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773c4b57-17bf-4159-9b75-81072c68692e" path="/var/lib/kubelet/pods/773c4b57-17bf-4159-9b75-81072c68692e/volumes"
Feb 25 11:39:26 crc kubenswrapper[5005]: I0225 11:39:26.707506 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86534792-e561-447f-bcef-4ff82b02561c" path="/var/lib/kubelet/pods/86534792-e561-447f-bcef-4ff82b02561c/volumes"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.014972 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4dzww"]
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.016747 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.019424 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.023174 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7951928d-6b95-4766-b04f-3b7f448ad731","Type":"ContainerStarted","Data":"32fe1e0c34516ff5dec82b00fa4190bbb2807aa8c14b2fc600fb4de94c57b3c3"}
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.043587 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4dzww"]
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.074483 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.155223 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpjr\" (UniqueName: \"kubernetes.io/projected/776b5bfd-b20d-402b-a529-c243e074bf71-kube-api-access-9qpjr\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.155305 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.155457 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.155520 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.155632 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.155713 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-config\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257122 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257181 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-config\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257276 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qpjr\" (UniqueName: \"kubernetes.io/projected/776b5bfd-b20d-402b-a529-c243e074bf71-kube-api-access-9qpjr\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257354 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257403 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257436 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.257958 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.258128 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-config\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.258171 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.258414 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.258490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.358975 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpjr\" (UniqueName: \"kubernetes.io/projected/776b5bfd-b20d-402b-a529-c243e074bf71-kube-api-access-9qpjr\") pod \"dnsmasq-dns-6447ccbd8f-4dzww\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:27 crc kubenswrapper[5005]: I0225 11:39:27.636518 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww"
Feb 25 11:39:28 crc kubenswrapper[5005]: I0225 11:39:28.032355 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e53714d3-f02e-4700-a89d-d6a8dbcff7d3","Type":"ContainerStarted","Data":"df9b93da9561cf9bb3a786350f2cb8cc6197adf1e94d6378b21c61be3c519504"}
Feb 25 11:39:28 crc kubenswrapper[5005]: I0225 11:39:28.033815 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7951928d-6b95-4766-b04f-3b7f448ad731","Type":"ContainerStarted","Data":"3de78f763149fa261d877146ec0dfeb6376c8e8131e79c120b8d56943296e763"}
Feb 25 11:39:28 crc kubenswrapper[5005]: I0225 11:39:28.135329 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4dzww"]
Feb 25 11:39:29 crc kubenswrapper[5005]: I0225 11:39:29.048060 5005 generic.go:334] "Generic (PLEG): container finished" podID="776b5bfd-b20d-402b-a529-c243e074bf71" containerID="9fcde274f9a1d6760a7a48845c3f9a55b61cbcf5a3f1ac854914e0bcf17cdbb1" exitCode=0
Feb 25 11:39:29 crc kubenswrapper[5005]: I0225 11:39:29.048136 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" event={"ID":"776b5bfd-b20d-402b-a529-c243e074bf71","Type":"ContainerDied","Data":"9fcde274f9a1d6760a7a48845c3f9a55b61cbcf5a3f1ac854914e0bcf17cdbb1"}
Feb 25 11:39:29 crc kubenswrapper[5005]: I0225 11:39:29.048648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" event={"ID":"776b5bfd-b20d-402b-a529-c243e074bf71","Type":"ContainerStarted","Data":"663a02443ed8aa2f5b1f040b2dab66e8810572bb40493c6511c7a5829c618150"}
Feb 25 11:39:29 crc kubenswrapper[5005]: I0225 11:39:29.052618 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0"
event={"ID":"e53714d3-f02e-4700-a89d-d6a8dbcff7d3","Type":"ContainerStarted","Data":"f1e246362a5caabe865ded0d2a5547d856c26c1fc00efee2d5a45bba4390570f"} Feb 25 11:39:30 crc kubenswrapper[5005]: I0225 11:39:30.062354 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" event={"ID":"776b5bfd-b20d-402b-a529-c243e074bf71","Type":"ContainerStarted","Data":"129533c0d07b7b6154442c9b8b210a7230b177f9e138ab348793f29c2357d04b"} Feb 25 11:39:30 crc kubenswrapper[5005]: I0225 11:39:30.097340 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" podStartSLOduration=4.097325137 podStartE2EDuration="4.097325137s" podCreationTimestamp="2026-02-25 11:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:39:30.088962722 +0000 UTC m=+1284.129695059" watchObservedRunningTime="2026-02-25 11:39:30.097325137 +0000 UTC m=+1284.138057464" Feb 25 11:39:31 crc kubenswrapper[5005]: I0225 11:39:31.075153 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" Feb 25 11:39:37 crc kubenswrapper[5005]: I0225 11:39:37.639768 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" Feb 25 11:39:37 crc kubenswrapper[5005]: I0225 11:39:37.721886 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-xdxb8"] Feb 25 11:39:37 crc kubenswrapper[5005]: I0225 11:39:37.722229 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerName="dnsmasq-dns" containerID="cri-o://e9c775f852281bdc6bb8b5cbf58776f8307b2680e0ece8466c24b636d1632fbe" gracePeriod=10 Feb 25 11:39:37 crc kubenswrapper[5005]: I0225 11:39:37.950810 5005 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-c22hh"] Feb 25 11:39:37 crc kubenswrapper[5005]: I0225 11:39:37.973051 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-c22hh"] Feb 25 11:39:37 crc kubenswrapper[5005]: I0225 11:39:37.973151 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.113621 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.114061 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.114088 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.114121 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-config\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: 
\"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.114160 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.114231 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmfm4\" (UniqueName: \"kubernetes.io/projected/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-kube-api-access-hmfm4\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.172159 5005 generic.go:334] "Generic (PLEG): container finished" podID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerID="e9c775f852281bdc6bb8b5cbf58776f8307b2680e0ece8466c24b636d1632fbe" exitCode=0 Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.172208 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" event={"ID":"00cece27-2c2b-466b-a9c8-e72aabef0410","Type":"ContainerDied","Data":"e9c775f852281bdc6bb8b5cbf58776f8307b2680e0ece8466c24b636d1632fbe"} Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.172235 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" event={"ID":"00cece27-2c2b-466b-a9c8-e72aabef0410","Type":"ContainerDied","Data":"fc5bfec534b114c0298b05aa301c45cffc90375bc2f8e1a920021119c1dc8ca0"} Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.172246 5005 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fc5bfec534b114c0298b05aa301c45cffc90375bc2f8e1a920021119c1dc8ca0" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.215742 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.215815 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.215846 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.215874 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-config\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.215916 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " 
pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.215975 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmfm4\" (UniqueName: \"kubernetes.io/projected/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-kube-api-access-hmfm4\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.216901 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.217190 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.217468 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-config\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.217916 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 
11:39:38.218191 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.237905 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmfm4\" (UniqueName: \"kubernetes.io/projected/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-kube-api-access-hmfm4\") pod \"dnsmasq-dns-864d5fc68c-c22hh\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.239583 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.316567 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.316963 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-nb\") pod \"00cece27-2c2b-466b-a9c8-e72aabef0410\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.317199 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-config\") pod \"00cece27-2c2b-466b-a9c8-e72aabef0410\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.317259 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-sb\") pod \"00cece27-2c2b-466b-a9c8-e72aabef0410\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.317305 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgnf\" (UniqueName: \"kubernetes.io/projected/00cece27-2c2b-466b-a9c8-e72aabef0410-kube-api-access-llgnf\") pod \"00cece27-2c2b-466b-a9c8-e72aabef0410\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.317328 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-dns-svc\") pod \"00cece27-2c2b-466b-a9c8-e72aabef0410\" (UID: \"00cece27-2c2b-466b-a9c8-e72aabef0410\") " Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.323099 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/00cece27-2c2b-466b-a9c8-e72aabef0410-kube-api-access-llgnf" (OuterVolumeSpecName: "kube-api-access-llgnf") pod "00cece27-2c2b-466b-a9c8-e72aabef0410" (UID: "00cece27-2c2b-466b-a9c8-e72aabef0410"). InnerVolumeSpecName "kube-api-access-llgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.358807 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-config" (OuterVolumeSpecName: "config") pod "00cece27-2c2b-466b-a9c8-e72aabef0410" (UID: "00cece27-2c2b-466b-a9c8-e72aabef0410"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.364003 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00cece27-2c2b-466b-a9c8-e72aabef0410" (UID: "00cece27-2c2b-466b-a9c8-e72aabef0410"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.371735 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00cece27-2c2b-466b-a9c8-e72aabef0410" (UID: "00cece27-2c2b-466b-a9c8-e72aabef0410"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.375966 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00cece27-2c2b-466b-a9c8-e72aabef0410" (UID: "00cece27-2c2b-466b-a9c8-e72aabef0410"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.419262 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.419301 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llgnf\" (UniqueName: \"kubernetes.io/projected/00cece27-2c2b-466b-a9c8-e72aabef0410-kube-api-access-llgnf\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.419313 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.419321 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:38 crc kubenswrapper[5005]: I0225 11:39:38.419330 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cece27-2c2b-466b-a9c8-e72aabef0410-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:39 crc kubenswrapper[5005]: I0225 11:39:39.181002 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-xdxb8" Feb 25 11:39:39 crc kubenswrapper[5005]: I0225 11:39:39.203238 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-xdxb8"] Feb 25 11:39:39 crc kubenswrapper[5005]: I0225 11:39:39.208551 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-xdxb8"] Feb 25 11:39:39 crc kubenswrapper[5005]: I0225 11:39:39.341141 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-c22hh"] Feb 25 11:39:40 crc kubenswrapper[5005]: I0225 11:39:40.191885 5005 generic.go:334] "Generic (PLEG): container finished" podID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerID="d1fb3736af0dac1d68c5dc0e6520bdb6a2e0d5b60f34d0084a3816ff0a118dd3" exitCode=0 Feb 25 11:39:40 crc kubenswrapper[5005]: I0225 11:39:40.192004 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" event={"ID":"84abcbc9-9dcf-42fa-ac91-67d55b7895b8","Type":"ContainerDied","Data":"d1fb3736af0dac1d68c5dc0e6520bdb6a2e0d5b60f34d0084a3816ff0a118dd3"} Feb 25 11:39:40 crc kubenswrapper[5005]: I0225 11:39:40.192196 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" event={"ID":"84abcbc9-9dcf-42fa-ac91-67d55b7895b8","Type":"ContainerStarted","Data":"f3f38256fdf40b3ce521b522c2b4fb3083e0dbd8ac7c7d65ad01bf9d58e94027"} Feb 25 11:39:40 crc kubenswrapper[5005]: I0225 11:39:40.695008 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" path="/var/lib/kubelet/pods/00cece27-2c2b-466b-a9c8-e72aabef0410/volumes" Feb 25 11:39:41 crc kubenswrapper[5005]: I0225 11:39:41.206629 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" 
event={"ID":"84abcbc9-9dcf-42fa-ac91-67d55b7895b8","Type":"ContainerStarted","Data":"954bb89a70bbfcc542b01de400fd7af576b9256d11d9d25461ae572ee9913db8"} Feb 25 11:39:41 crc kubenswrapper[5005]: I0225 11:39:41.207051 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:41 crc kubenswrapper[5005]: I0225 11:39:41.252903 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" podStartSLOduration=4.252882112 podStartE2EDuration="4.252882112s" podCreationTimestamp="2026-02-25 11:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:39:41.242183886 +0000 UTC m=+1295.282916223" watchObservedRunningTime="2026-02-25 11:39:41.252882112 +0000 UTC m=+1295.293614449" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.345440 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx"] Feb 25 11:39:43 crc kubenswrapper[5005]: E0225 11:39:43.346588 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerName="init" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.346631 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerName="init" Feb 25 11:39:43 crc kubenswrapper[5005]: E0225 11:39:43.346684 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerName="dnsmasq-dns" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.346702 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerName="dnsmasq-dns" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.347051 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="00cece27-2c2b-466b-a9c8-e72aabef0410" containerName="dnsmasq-dns" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.348138 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.352511 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.352957 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.353641 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.353971 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.381087 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx"] Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.444517 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.444623 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" 
(UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.444685 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7wlv\" (UniqueName: \"kubernetes.io/projected/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-kube-api-access-q7wlv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.444758 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.546319 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.546522 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 
11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.546568 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.546610 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7wlv\" (UniqueName: \"kubernetes.io/projected/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-kube-api-access-q7wlv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.554422 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.554449 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.566988 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.569327 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7wlv\" (UniqueName: \"kubernetes.io/projected/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-kube-api-access-q7wlv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:43 crc kubenswrapper[5005]: I0225 11:39:43.685140 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:39:44 crc kubenswrapper[5005]: I0225 11:39:44.281855 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx"] Feb 25 11:39:45 crc kubenswrapper[5005]: I0225 11:39:45.258924 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" event={"ID":"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb","Type":"ContainerStarted","Data":"0bd91615e8c07f0e76f5b27e68508a6263d671a1807cee287b100a2b7593fbc5"} Feb 25 11:39:48 crc kubenswrapper[5005]: I0225 11:39:48.317587 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 11:39:48 crc kubenswrapper[5005]: I0225 11:39:48.391899 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4dzww"] Feb 25 11:39:48 crc kubenswrapper[5005]: I0225 11:39:48.392426 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" 
podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="dnsmasq-dns" containerID="cri-o://129533c0d07b7b6154442c9b8b210a7230b177f9e138ab348793f29c2357d04b" gracePeriod=10 Feb 25 11:39:49 crc kubenswrapper[5005]: I0225 11:39:49.329179 5005 generic.go:334] "Generic (PLEG): container finished" podID="776b5bfd-b20d-402b-a529-c243e074bf71" containerID="129533c0d07b7b6154442c9b8b210a7230b177f9e138ab348793f29c2357d04b" exitCode=0 Feb 25 11:39:49 crc kubenswrapper[5005]: I0225 11:39:49.329500 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" event={"ID":"776b5bfd-b20d-402b-a529-c243e074bf71","Type":"ContainerDied","Data":"129533c0d07b7b6154442c9b8b210a7230b177f9e138ab348793f29c2357d04b"} Feb 25 11:39:52 crc kubenswrapper[5005]: I0225 11:39:52.637724 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.254360 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.332110 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-config\") pod \"776b5bfd-b20d-402b-a529-c243e074bf71\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.332234 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-dns-svc\") pod \"776b5bfd-b20d-402b-a529-c243e074bf71\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.332609 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qpjr\" (UniqueName: \"kubernetes.io/projected/776b5bfd-b20d-402b-a529-c243e074bf71-kube-api-access-9qpjr\") pod \"776b5bfd-b20d-402b-a529-c243e074bf71\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.332651 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-nb\") pod \"776b5bfd-b20d-402b-a529-c243e074bf71\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.332679 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-sb\") pod \"776b5bfd-b20d-402b-a529-c243e074bf71\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.332708 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-openstack-edpm-ipam\") pod \"776b5bfd-b20d-402b-a529-c243e074bf71\" (UID: \"776b5bfd-b20d-402b-a529-c243e074bf71\") " Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.364548 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776b5bfd-b20d-402b-a529-c243e074bf71-kube-api-access-9qpjr" (OuterVolumeSpecName: "kube-api-access-9qpjr") pod "776b5bfd-b20d-402b-a529-c243e074bf71" (UID: "776b5bfd-b20d-402b-a529-c243e074bf71"). InnerVolumeSpecName "kube-api-access-9qpjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.404895 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.405390 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-4dzww" event={"ID":"776b5bfd-b20d-402b-a529-c243e074bf71","Type":"ContainerDied","Data":"663a02443ed8aa2f5b1f040b2dab66e8810572bb40493c6511c7a5829c618150"} Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.405475 5005 scope.go:117] "RemoveContainer" containerID="129533c0d07b7b6154442c9b8b210a7230b177f9e138ab348793f29c2357d04b" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.406757 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" event={"ID":"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb","Type":"ContainerStarted","Data":"bbd6076323f167afc45fa69cd58ecc26cb1de71a0593366f148299504254c2f0"} Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.429180 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "776b5bfd-b20d-402b-a529-c243e074bf71" (UID: 
"776b5bfd-b20d-402b-a529-c243e074bf71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.432806 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-config" (OuterVolumeSpecName: "config") pod "776b5bfd-b20d-402b-a529-c243e074bf71" (UID: "776b5bfd-b20d-402b-a529-c243e074bf71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.435005 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qpjr\" (UniqueName: \"kubernetes.io/projected/776b5bfd-b20d-402b-a529-c243e074bf71-kube-api-access-9qpjr\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.435056 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.435073 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-config\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.435689 5005 scope.go:117] "RemoveContainer" containerID="9fcde274f9a1d6760a7a48845c3f9a55b61cbcf5a3f1ac854914e0bcf17cdbb1" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.443236 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "776b5bfd-b20d-402b-a529-c243e074bf71" (UID: "776b5bfd-b20d-402b-a529-c243e074bf71"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.447960 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "776b5bfd-b20d-402b-a529-c243e074bf71" (UID: "776b5bfd-b20d-402b-a529-c243e074bf71"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.453942 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" podStartSLOduration=1.786276494 podStartE2EDuration="10.453922711s" podCreationTimestamp="2026-02-25 11:39:43 +0000 UTC" firstStartedPulling="2026-02-25 11:39:44.293490924 +0000 UTC m=+1298.334223251" lastFinishedPulling="2026-02-25 11:39:52.961137141 +0000 UTC m=+1307.001869468" observedRunningTime="2026-02-25 11:39:53.440039818 +0000 UTC m=+1307.480772145" watchObservedRunningTime="2026-02-25 11:39:53.453922711 +0000 UTC m=+1307.494655038" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.467146 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "776b5bfd-b20d-402b-a529-c243e074bf71" (UID: "776b5bfd-b20d-402b-a529-c243e074bf71"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.536172 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.536207 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.536219 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/776b5bfd-b20d-402b-a529-c243e074bf71-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.739261 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4dzww"] Feb 25 11:39:53 crc kubenswrapper[5005]: I0225 11:39:53.762759 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-4dzww"] Feb 25 11:39:54 crc kubenswrapper[5005]: I0225 11:39:54.710314 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" path="/var/lib/kubelet/pods/776b5bfd-b20d-402b-a529-c243e074bf71/volumes" Feb 25 11:39:58 crc kubenswrapper[5005]: I0225 11:39:58.087712 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:39:58 crc kubenswrapper[5005]: I0225 11:39:58.088252 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.133549 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533660-sz59c"] Feb 25 11:40:00 crc kubenswrapper[5005]: E0225 11:40:00.134204 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="init" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.134220 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="init" Feb 25 11:40:00 crc kubenswrapper[5005]: E0225 11:40:00.134244 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="dnsmasq-dns" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.134252 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="dnsmasq-dns" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.134458 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="776b5bfd-b20d-402b-a529-c243e074bf71" containerName="dnsmasq-dns" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.135094 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.136969 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.140696 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.142408 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.149609 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533660-sz59c"] Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.259825 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztsk\" (UniqueName: \"kubernetes.io/projected/930e9619-3e1f-454f-b6f0-10cebcf075b3-kube-api-access-tztsk\") pod \"auto-csr-approver-29533660-sz59c\" (UID: \"930e9619-3e1f-454f-b6f0-10cebcf075b3\") " pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.361583 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztsk\" (UniqueName: \"kubernetes.io/projected/930e9619-3e1f-454f-b6f0-10cebcf075b3-kube-api-access-tztsk\") pod \"auto-csr-approver-29533660-sz59c\" (UID: \"930e9619-3e1f-454f-b6f0-10cebcf075b3\") " pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.385763 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztsk\" (UniqueName: \"kubernetes.io/projected/930e9619-3e1f-454f-b6f0-10cebcf075b3-kube-api-access-tztsk\") pod \"auto-csr-approver-29533660-sz59c\" (UID: \"930e9619-3e1f-454f-b6f0-10cebcf075b3\") " 
pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.453985 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.477728 5005 generic.go:334] "Generic (PLEG): container finished" podID="7951928d-6b95-4766-b04f-3b7f448ad731" containerID="3de78f763149fa261d877146ec0dfeb6376c8e8131e79c120b8d56943296e763" exitCode=0 Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.477866 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7951928d-6b95-4766-b04f-3b7f448ad731","Type":"ContainerDied","Data":"3de78f763149fa261d877146ec0dfeb6376c8e8131e79c120b8d56943296e763"} Feb 25 11:40:00 crc kubenswrapper[5005]: I0225 11:40:00.892566 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533660-sz59c"] Feb 25 11:40:00 crc kubenswrapper[5005]: W0225 11:40:00.895517 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930e9619_3e1f_454f_b6f0_10cebcf075b3.slice/crio-bb6b3eb2212e9398f85fbf5763d9bbe7c734821daa41520bb8c86a0e22300854 WatchSource:0}: Error finding container bb6b3eb2212e9398f85fbf5763d9bbe7c734821daa41520bb8c86a0e22300854: Status 404 returned error can't find the container with id bb6b3eb2212e9398f85fbf5763d9bbe7c734821daa41520bb8c86a0e22300854 Feb 25 11:40:01 crc kubenswrapper[5005]: I0225 11:40:01.488036 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533660-sz59c" event={"ID":"930e9619-3e1f-454f-b6f0-10cebcf075b3","Type":"ContainerStarted","Data":"bb6b3eb2212e9398f85fbf5763d9bbe7c734821daa41520bb8c86a0e22300854"} Feb 25 11:40:01 crc kubenswrapper[5005]: I0225 11:40:01.490396 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7951928d-6b95-4766-b04f-3b7f448ad731","Type":"ContainerStarted","Data":"c4c8b612ad4c21898d17e31a0bf0efe8fbf3b6ac334961366427b335ba6bead2"} Feb 25 11:40:01 crc kubenswrapper[5005]: I0225 11:40:01.490597 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 25 11:40:01 crc kubenswrapper[5005]: I0225 11:40:01.491966 5005 generic.go:334] "Generic (PLEG): container finished" podID="e53714d3-f02e-4700-a89d-d6a8dbcff7d3" containerID="f1e246362a5caabe865ded0d2a5547d856c26c1fc00efee2d5a45bba4390570f" exitCode=0 Feb 25 11:40:01 crc kubenswrapper[5005]: I0225 11:40:01.491980 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e53714d3-f02e-4700-a89d-d6a8dbcff7d3","Type":"ContainerDied","Data":"f1e246362a5caabe865ded0d2a5547d856c26c1fc00efee2d5a45bba4390570f"} Feb 25 11:40:01 crc kubenswrapper[5005]: I0225 11:40:01.539903 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.5398612 podStartE2EDuration="36.5398612s" podCreationTimestamp="2026-02-25 11:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:40:01.51888432 +0000 UTC m=+1315.559616647" watchObservedRunningTime="2026-02-25 11:40:01.5398612 +0000 UTC m=+1315.580593527" Feb 25 11:40:02 crc kubenswrapper[5005]: I0225 11:40:02.502578 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e53714d3-f02e-4700-a89d-d6a8dbcff7d3","Type":"ContainerStarted","Data":"27579fe704da4e2e88c6395a9a55622c7679eeebc13f32eaf09dc8e24d64793c"} Feb 25 11:40:02 crc kubenswrapper[5005]: I0225 11:40:02.503136 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 25 11:40:02 crc kubenswrapper[5005]: I0225 11:40:02.504203 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533660-sz59c" event={"ID":"930e9619-3e1f-454f-b6f0-10cebcf075b3","Type":"ContainerStarted","Data":"fad59e2dfdd148190603447ea44946b4d8e53047ced797174e21b05b78d9c9b1"} Feb 25 11:40:02 crc kubenswrapper[5005]: I0225 11:40:02.531163 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.531140253 podStartE2EDuration="36.531140253s" podCreationTimestamp="2026-02-25 11:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 11:40:02.522428958 +0000 UTC m=+1316.563161305" watchObservedRunningTime="2026-02-25 11:40:02.531140253 +0000 UTC m=+1316.571872580" Feb 25 11:40:02 crc kubenswrapper[5005]: I0225 11:40:02.544035 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533660-sz59c" podStartSLOduration=1.294433684 podStartE2EDuration="2.544016825s" podCreationTimestamp="2026-02-25 11:40:00 +0000 UTC" firstStartedPulling="2026-02-25 11:40:00.899028901 +0000 UTC m=+1314.939761238" lastFinishedPulling="2026-02-25 11:40:02.148612052 +0000 UTC m=+1316.189344379" observedRunningTime="2026-02-25 11:40:02.542301083 +0000 UTC m=+1316.583033410" watchObservedRunningTime="2026-02-25 11:40:02.544016825 +0000 UTC m=+1316.584749152" Feb 25 11:40:03 crc kubenswrapper[5005]: I0225 11:40:03.515628 5005 generic.go:334] "Generic (PLEG): container finished" podID="930e9619-3e1f-454f-b6f0-10cebcf075b3" containerID="fad59e2dfdd148190603447ea44946b4d8e53047ced797174e21b05b78d9c9b1" exitCode=0 Feb 25 11:40:03 crc kubenswrapper[5005]: I0225 11:40:03.515682 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533660-sz59c" 
event={"ID":"930e9619-3e1f-454f-b6f0-10cebcf075b3","Type":"ContainerDied","Data":"fad59e2dfdd148190603447ea44946b4d8e53047ced797174e21b05b78d9c9b1"} Feb 25 11:40:04 crc kubenswrapper[5005]: I0225 11:40:04.884129 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:04 crc kubenswrapper[5005]: I0225 11:40:04.958775 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztsk\" (UniqueName: \"kubernetes.io/projected/930e9619-3e1f-454f-b6f0-10cebcf075b3-kube-api-access-tztsk\") pod \"930e9619-3e1f-454f-b6f0-10cebcf075b3\" (UID: \"930e9619-3e1f-454f-b6f0-10cebcf075b3\") " Feb 25 11:40:04 crc kubenswrapper[5005]: I0225 11:40:04.970951 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930e9619-3e1f-454f-b6f0-10cebcf075b3-kube-api-access-tztsk" (OuterVolumeSpecName: "kube-api-access-tztsk") pod "930e9619-3e1f-454f-b6f0-10cebcf075b3" (UID: "930e9619-3e1f-454f-b6f0-10cebcf075b3"). InnerVolumeSpecName "kube-api-access-tztsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:40:05 crc kubenswrapper[5005]: I0225 11:40:05.060694 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztsk\" (UniqueName: \"kubernetes.io/projected/930e9619-3e1f-454f-b6f0-10cebcf075b3-kube-api-access-tztsk\") on node \"crc\" DevicePath \"\"" Feb 25 11:40:05 crc kubenswrapper[5005]: I0225 11:40:05.535862 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533660-sz59c" event={"ID":"930e9619-3e1f-454f-b6f0-10cebcf075b3","Type":"ContainerDied","Data":"bb6b3eb2212e9398f85fbf5763d9bbe7c734821daa41520bb8c86a0e22300854"} Feb 25 11:40:05 crc kubenswrapper[5005]: I0225 11:40:05.535899 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6b3eb2212e9398f85fbf5763d9bbe7c734821daa41520bb8c86a0e22300854" Feb 25 11:40:05 crc kubenswrapper[5005]: I0225 11:40:05.535943 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533660-sz59c" Feb 25 11:40:05 crc kubenswrapper[5005]: I0225 11:40:05.962082 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533654-trt7q"] Feb 25 11:40:05 crc kubenswrapper[5005]: I0225 11:40:05.968766 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533654-trt7q"] Feb 25 11:40:06 crc kubenswrapper[5005]: I0225 11:40:06.549503 5005 generic.go:334] "Generic (PLEG): container finished" podID="b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" containerID="bbd6076323f167afc45fa69cd58ecc26cb1de71a0593366f148299504254c2f0" exitCode=0 Feb 25 11:40:06 crc kubenswrapper[5005]: I0225 11:40:06.549552 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" 
event={"ID":"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb","Type":"ContainerDied","Data":"bbd6076323f167afc45fa69cd58ecc26cb1de71a0593366f148299504254c2f0"} Feb 25 11:40:06 crc kubenswrapper[5005]: I0225 11:40:06.695507 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bb9afc-1f26-46ce-bf55-2087e295f2e8" path="/var/lib/kubelet/pods/78bb9afc-1f26-46ce-bf55-2087e295f2e8/volumes" Feb 25 11:40:07 crc kubenswrapper[5005]: I0225 11:40:07.969036 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.121463 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-ssh-key-openstack-edpm-ipam\") pod \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.121830 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-inventory\") pod \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.121910 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7wlv\" (UniqueName: \"kubernetes.io/projected/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-kube-api-access-q7wlv\") pod \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.121962 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-repo-setup-combined-ca-bundle\") pod 
\"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\" (UID: \"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb\") " Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.127784 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" (UID: "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.129774 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-kube-api-access-q7wlv" (OuterVolumeSpecName: "kube-api-access-q7wlv") pod "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" (UID: "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb"). InnerVolumeSpecName "kube-api-access-q7wlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.176012 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" (UID: "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.182692 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-inventory" (OuterVolumeSpecName: "inventory") pod "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" (UID: "b9e1d51e-93aa-42b5-af7d-caa1b451b7fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.225517 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7wlv\" (UniqueName: \"kubernetes.io/projected/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-kube-api-access-q7wlv\") on node \"crc\" DevicePath \"\"" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.225587 5005 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.225601 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.225614 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.573321 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" event={"ID":"b9e1d51e-93aa-42b5-af7d-caa1b451b7fb","Type":"ContainerDied","Data":"0bd91615e8c07f0e76f5b27e68508a6263d671a1807cee287b100a2b7593fbc5"} Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.573361 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd91615e8c07f0e76f5b27e68508a6263d671a1807cee287b100a2b7593fbc5" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.573458 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.646904 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw"] Feb 25 11:40:08 crc kubenswrapper[5005]: E0225 11:40:08.647270 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930e9619-3e1f-454f-b6f0-10cebcf075b3" containerName="oc" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.647287 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="930e9619-3e1f-454f-b6f0-10cebcf075b3" containerName="oc" Feb 25 11:40:08 crc kubenswrapper[5005]: E0225 11:40:08.647306 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.647314 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.647486 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.647504 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="930e9619-3e1f-454f-b6f0-10cebcf075b3" containerName="oc" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.648017 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.649847 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.650098 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.650642 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.661767 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw"] Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.664566 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.735903 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.735979 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk5fc\" (UniqueName: \"kubernetes.io/projected/796df0b2-b3a6-4f4f-99cb-dda670ed4411-kube-api-access-mk5fc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 
11:40:08.736186 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.736328 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.837587 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.837744 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.837809 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk5fc\" (UniqueName: 
\"kubernetes.io/projected/796df0b2-b3a6-4f4f-99cb-dda670ed4411-kube-api-access-mk5fc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.837910 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.842312 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.842961 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.848158 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.862425 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk5fc\" (UniqueName: \"kubernetes.io/projected/796df0b2-b3a6-4f4f-99cb-dda670ed4411-kube-api-access-mk5fc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:08 crc kubenswrapper[5005]: I0225 11:40:08.967491 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:40:09 crc kubenswrapper[5005]: I0225 11:40:09.383185 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw"] Feb 25 11:40:09 crc kubenswrapper[5005]: I0225 11:40:09.597014 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" event={"ID":"796df0b2-b3a6-4f4f-99cb-dda670ed4411","Type":"ContainerStarted","Data":"b18c05be4c6af9f52e77407967f193e2327bc4d1da544d26d1abea21fdab2915"} Feb 25 11:40:10 crc kubenswrapper[5005]: I0225 11:40:10.608947 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" event={"ID":"796df0b2-b3a6-4f4f-99cb-dda670ed4411","Type":"ContainerStarted","Data":"60b53bcf84c11e3494e210a5aec6360d16f0e25342c611df553d11e582f3cea5"} Feb 25 11:40:10 crc kubenswrapper[5005]: I0225 11:40:10.634855 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" podStartSLOduration=2.243603835 podStartE2EDuration="2.634830442s" podCreationTimestamp="2026-02-25 11:40:08 +0000 UTC" firstStartedPulling="2026-02-25 11:40:09.389932394 +0000 UTC m=+1323.430664721" 
lastFinishedPulling="2026-02-25 11:40:09.781159001 +0000 UTC m=+1323.821891328" observedRunningTime="2026-02-25 11:40:10.628565392 +0000 UTC m=+1324.669297739" watchObservedRunningTime="2026-02-25 11:40:10.634830442 +0000 UTC m=+1324.675562809" Feb 25 11:40:14 crc kubenswrapper[5005]: I0225 11:40:14.412405 5005 scope.go:117] "RemoveContainer" containerID="11f599fbfa88123dd57ef003dbd336d8f4f102be15526c05b87074389c64f328" Feb 25 11:40:15 crc kubenswrapper[5005]: I0225 11:40:15.567664 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 25 11:40:16 crc kubenswrapper[5005]: I0225 11:40:16.608607 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 25 11:40:28 crc kubenswrapper[5005]: I0225 11:40:28.087137 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:40:28 crc kubenswrapper[5005]: I0225 11:40:28.087815 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.087698 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.088309 5005 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.088409 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.089159 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c816cc274b9544be8ead9d4693ad6ccdd4ec8d5d3bca03945f606f68950d1d53"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.089227 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://c816cc274b9544be8ead9d4693ad6ccdd4ec8d5d3bca03945f606f68950d1d53" gracePeriod=600 Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.271776 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvhk8"] Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.273855 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.285113 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvhk8"] Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.413816 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97g6\" (UniqueName: \"kubernetes.io/projected/c0d1392a-9a05-4f55-903e-65cd02b30b5b-kube-api-access-c97g6\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.414352 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-catalog-content\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.414884 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-utilities\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.516208 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-utilities\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.516275 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c97g6\" (UniqueName: \"kubernetes.io/projected/c0d1392a-9a05-4f55-903e-65cd02b30b5b-kube-api-access-c97g6\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.516312 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-catalog-content\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.516665 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-utilities\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.516682 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-catalog-content\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.534312 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c97g6\" (UniqueName: \"kubernetes.io/projected/c0d1392a-9a05-4f55-903e-65cd02b30b5b-kube-api-access-c97g6\") pod \"redhat-operators-nvhk8\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.639857 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:40:58 crc kubenswrapper[5005]: I0225 11:40:58.893058 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvhk8"] Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.075729 5005 generic.go:334] "Generic (PLEG): container finished" podID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerID="b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897" exitCode=0 Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.076042 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerDied","Data":"b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897"} Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.076092 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerStarted","Data":"2c084854b2b92a3155e6f1d94ebff04180be561808221c0ceec6d14a3c51fdcf"} Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.080284 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="c816cc274b9544be8ead9d4693ad6ccdd4ec8d5d3bca03945f606f68950d1d53" exitCode=0 Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.080325 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"c816cc274b9544be8ead9d4693ad6ccdd4ec8d5d3bca03945f606f68950d1d53"} Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.080354 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94"} Feb 25 11:40:59 crc kubenswrapper[5005]: I0225 11:40:59.080384 5005 scope.go:117] "RemoveContainer" containerID="436a11adc02a3406c7c2d7029cfaa74683b64c268bdd958d676e56e989c38e2c" Feb 25 11:41:00 crc kubenswrapper[5005]: I0225 11:41:00.093362 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerStarted","Data":"8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6"} Feb 25 11:41:02 crc kubenswrapper[5005]: I0225 11:41:02.110356 5005 generic.go:334] "Generic (PLEG): container finished" podID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerID="8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6" exitCode=0 Feb 25 11:41:02 crc kubenswrapper[5005]: I0225 11:41:02.110449 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerDied","Data":"8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6"} Feb 25 11:41:03 crc kubenswrapper[5005]: I0225 11:41:03.122967 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerStarted","Data":"aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced"} Feb 25 11:41:08 crc kubenswrapper[5005]: I0225 11:41:08.640858 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:41:08 crc kubenswrapper[5005]: I0225 11:41:08.641482 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:41:09 crc kubenswrapper[5005]: I0225 11:41:09.702318 5005 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvhk8" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="registry-server" probeResult="failure" output=< Feb 25 11:41:09 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:41:09 crc kubenswrapper[5005]: > Feb 25 11:41:14 crc kubenswrapper[5005]: I0225 11:41:14.562970 5005 scope.go:117] "RemoveContainer" containerID="44b43c062143be50350cdf7b19a3fb68bcc51eeaca2f7dbccb6f7a44c4ec64d9" Feb 25 11:41:14 crc kubenswrapper[5005]: I0225 11:41:14.587395 5005 scope.go:117] "RemoveContainer" containerID="910dec65f28760f00b5da1cd4499b5bed074277e43f7eeaba4945ccdb6ef44a1" Feb 25 11:41:18 crc kubenswrapper[5005]: I0225 11:41:18.707650 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:41:18 crc kubenswrapper[5005]: I0225 11:41:18.735310 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvhk8" podStartSLOduration=17.169808952 podStartE2EDuration="20.735291641s" podCreationTimestamp="2026-02-25 11:40:58 +0000 UTC" firstStartedPulling="2026-02-25 11:40:59.077394961 +0000 UTC m=+1373.118127288" lastFinishedPulling="2026-02-25 11:41:02.64287765 +0000 UTC m=+1376.683609977" observedRunningTime="2026-02-25 11:41:03.14034939 +0000 UTC m=+1377.181081717" watchObservedRunningTime="2026-02-25 11:41:18.735291641 +0000 UTC m=+1392.776023978" Feb 25 11:41:18 crc kubenswrapper[5005]: I0225 11:41:18.774558 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:41:18 crc kubenswrapper[5005]: I0225 11:41:18.951795 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvhk8"] Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.294023 5005 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-nvhk8" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="registry-server" containerID="cri-o://aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced" gracePeriod=2 Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.850855 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.955749 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c97g6\" (UniqueName: \"kubernetes.io/projected/c0d1392a-9a05-4f55-903e-65cd02b30b5b-kube-api-access-c97g6\") pod \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.955828 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-catalog-content\") pod \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.955877 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-utilities\") pod \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\" (UID: \"c0d1392a-9a05-4f55-903e-65cd02b30b5b\") " Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.956808 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-utilities" (OuterVolumeSpecName: "utilities") pod "c0d1392a-9a05-4f55-903e-65cd02b30b5b" (UID: "c0d1392a-9a05-4f55-903e-65cd02b30b5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:41:20 crc kubenswrapper[5005]: I0225 11:41:20.963790 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d1392a-9a05-4f55-903e-65cd02b30b5b-kube-api-access-c97g6" (OuterVolumeSpecName: "kube-api-access-c97g6") pod "c0d1392a-9a05-4f55-903e-65cd02b30b5b" (UID: "c0d1392a-9a05-4f55-903e-65cd02b30b5b"). InnerVolumeSpecName "kube-api-access-c97g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.058721 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c97g6\" (UniqueName: \"kubernetes.io/projected/c0d1392a-9a05-4f55-903e-65cd02b30b5b-kube-api-access-c97g6\") on node \"crc\" DevicePath \"\"" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.058770 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.071655 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0d1392a-9a05-4f55-903e-65cd02b30b5b" (UID: "c0d1392a-9a05-4f55-903e-65cd02b30b5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.160712 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d1392a-9a05-4f55-903e-65cd02b30b5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.310591 5005 generic.go:334] "Generic (PLEG): container finished" podID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerID="aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced" exitCode=0 Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.310648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerDied","Data":"aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced"} Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.310692 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvhk8" event={"ID":"c0d1392a-9a05-4f55-903e-65cd02b30b5b","Type":"ContainerDied","Data":"2c084854b2b92a3155e6f1d94ebff04180be561808221c0ceec6d14a3c51fdcf"} Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.310713 5005 scope.go:117] "RemoveContainer" containerID="aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.310719 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvhk8" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.350901 5005 scope.go:117] "RemoveContainer" containerID="8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.368241 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvhk8"] Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.386790 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvhk8"] Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.396750 5005 scope.go:117] "RemoveContainer" containerID="b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.441956 5005 scope.go:117] "RemoveContainer" containerID="aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced" Feb 25 11:41:21 crc kubenswrapper[5005]: E0225 11:41:21.442351 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced\": container with ID starting with aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced not found: ID does not exist" containerID="aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.442405 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced"} err="failed to get container status \"aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced\": rpc error: code = NotFound desc = could not find container \"aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced\": container with ID starting with aace6243b8f9f4ed9f08e03732b1a42afb9de413c680c7037f984b49c9a1bced not found: ID does 
not exist" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.442430 5005 scope.go:117] "RemoveContainer" containerID="8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6" Feb 25 11:41:21 crc kubenswrapper[5005]: E0225 11:41:21.442695 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6\": container with ID starting with 8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6 not found: ID does not exist" containerID="8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.442718 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6"} err="failed to get container status \"8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6\": rpc error: code = NotFound desc = could not find container \"8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6\": container with ID starting with 8d66e7b2cca15d57d7af243424aeee60d0f87c018eb9db9e87e8f248c1f359c6 not found: ID does not exist" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.442733 5005 scope.go:117] "RemoveContainer" containerID="b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897" Feb 25 11:41:21 crc kubenswrapper[5005]: E0225 11:41:21.442996 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897\": container with ID starting with b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897 not found: ID does not exist" containerID="b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897" Feb 25 11:41:21 crc kubenswrapper[5005]: I0225 11:41:21.443014 5005 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897"} err="failed to get container status \"b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897\": rpc error: code = NotFound desc = could not find container \"b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897\": container with ID starting with b2c7c5f5a3ff5bfa9d04af14f2dd401eeb6de514529100eb65f910fd3bf06897 not found: ID does not exist" Feb 25 11:41:22 crc kubenswrapper[5005]: I0225 11:41:22.698198 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" path="/var/lib/kubelet/pods/c0d1392a-9a05-4f55-903e-65cd02b30b5b/volumes" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.143417 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533662-6cx7c"] Feb 25 11:42:00 crc kubenswrapper[5005]: E0225 11:42:00.144396 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="registry-server" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.144411 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="registry-server" Feb 25 11:42:00 crc kubenswrapper[5005]: E0225 11:42:00.144434 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="extract-utilities" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.144444 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="extract-utilities" Feb 25 11:42:00 crc kubenswrapper[5005]: E0225 11:42:00.144462 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="extract-content" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.144471 5005 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="extract-content" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.144696 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d1392a-9a05-4f55-903e-65cd02b30b5b" containerName="registry-server" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.145581 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.147856 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.147999 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.148322 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.160896 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533662-6cx7c"] Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.184758 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtb6\" (UniqueName: \"kubernetes.io/projected/caeeaeb7-a7f5-49f0-9fa2-2788312deefa-kube-api-access-dxtb6\") pod \"auto-csr-approver-29533662-6cx7c\" (UID: \"caeeaeb7-a7f5-49f0-9fa2-2788312deefa\") " pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.286054 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxtb6\" (UniqueName: \"kubernetes.io/projected/caeeaeb7-a7f5-49f0-9fa2-2788312deefa-kube-api-access-dxtb6\") pod \"auto-csr-approver-29533662-6cx7c\" (UID: 
\"caeeaeb7-a7f5-49f0-9fa2-2788312deefa\") " pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.305865 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxtb6\" (UniqueName: \"kubernetes.io/projected/caeeaeb7-a7f5-49f0-9fa2-2788312deefa-kube-api-access-dxtb6\") pod \"auto-csr-approver-29533662-6cx7c\" (UID: \"caeeaeb7-a7f5-49f0-9fa2-2788312deefa\") " pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.467842 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.959154 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533662-6cx7c"] Feb 25 11:42:00 crc kubenswrapper[5005]: I0225 11:42:00.976389 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:42:01 crc kubenswrapper[5005]: I0225 11:42:01.731018 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" event={"ID":"caeeaeb7-a7f5-49f0-9fa2-2788312deefa","Type":"ContainerStarted","Data":"c4056a1428c2bbffd9722eeab5b9b5abc7652db89958b8b83006202b6b175f38"} Feb 25 11:42:02 crc kubenswrapper[5005]: I0225 11:42:02.743187 5005 generic.go:334] "Generic (PLEG): container finished" podID="caeeaeb7-a7f5-49f0-9fa2-2788312deefa" containerID="0b05392c1b22cf360818b23617594aab29720670bc1bbcf219fb4e2f3b9bbe59" exitCode=0 Feb 25 11:42:02 crc kubenswrapper[5005]: I0225 11:42:02.743290 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" event={"ID":"caeeaeb7-a7f5-49f0-9fa2-2788312deefa","Type":"ContainerDied","Data":"0b05392c1b22cf360818b23617594aab29720670bc1bbcf219fb4e2f3b9bbe59"} Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 
11:42:04.081055 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 11:42:04.164978 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxtb6\" (UniqueName: \"kubernetes.io/projected/caeeaeb7-a7f5-49f0-9fa2-2788312deefa-kube-api-access-dxtb6\") pod \"caeeaeb7-a7f5-49f0-9fa2-2788312deefa\" (UID: \"caeeaeb7-a7f5-49f0-9fa2-2788312deefa\") " Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 11:42:04.170899 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caeeaeb7-a7f5-49f0-9fa2-2788312deefa-kube-api-access-dxtb6" (OuterVolumeSpecName: "kube-api-access-dxtb6") pod "caeeaeb7-a7f5-49f0-9fa2-2788312deefa" (UID: "caeeaeb7-a7f5-49f0-9fa2-2788312deefa"). InnerVolumeSpecName "kube-api-access-dxtb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 11:42:04.267604 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxtb6\" (UniqueName: \"kubernetes.io/projected/caeeaeb7-a7f5-49f0-9fa2-2788312deefa-kube-api-access-dxtb6\") on node \"crc\" DevicePath \"\"" Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 11:42:04.765115 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" event={"ID":"caeeaeb7-a7f5-49f0-9fa2-2788312deefa","Type":"ContainerDied","Data":"c4056a1428c2bbffd9722eeab5b9b5abc7652db89958b8b83006202b6b175f38"} Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 11:42:04.765174 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4056a1428c2bbffd9722eeab5b9b5abc7652db89958b8b83006202b6b175f38" Feb 25 11:42:04 crc kubenswrapper[5005]: I0225 11:42:04.765174 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533662-6cx7c" Feb 25 11:42:05 crc kubenswrapper[5005]: I0225 11:42:05.163892 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533656-kxl9w"] Feb 25 11:42:05 crc kubenswrapper[5005]: I0225 11:42:05.174556 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533656-kxl9w"] Feb 25 11:42:06 crc kubenswrapper[5005]: I0225 11:42:06.700279 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83478c50-2691-45b5-abc6-039a23bb1645" path="/var/lib/kubelet/pods/83478c50-2691-45b5-abc6-039a23bb1645/volumes" Feb 25 11:42:14 crc kubenswrapper[5005]: I0225 11:42:14.700121 5005 scope.go:117] "RemoveContainer" containerID="c04c4388259f62c1d6f77ee3160313648fc0753c4b24d2afe7cb0263ca31d349" Feb 25 11:42:14 crc kubenswrapper[5005]: I0225 11:42:14.748279 5005 scope.go:117] "RemoveContainer" containerID="dedcdd95d0fcaae436f2c94355a36ad69612b37b90325fbca4b37c8544993638" Feb 25 11:42:14 crc kubenswrapper[5005]: I0225 11:42:14.814088 5005 scope.go:117] "RemoveContainer" containerID="aa7a9d8fd143595a707742ac934f2cc7a54f4c4ae8c5b8a2d61db4fa12497721" Feb 25 11:42:14 crc kubenswrapper[5005]: I0225 11:42:14.850791 5005 scope.go:117] "RemoveContainer" containerID="c0bbfe176cabf6d4dbe27ce68585a6d343e23b905b032d79cbe4c2c37ca69e25" Feb 25 11:42:15 crc kubenswrapper[5005]: I0225 11:42:15.020136 5005 scope.go:117] "RemoveContainer" containerID="352b4c167508b99802a2b2e479fa1a8f87190e923206c04f5d0eb1bef74ba6a0" Feb 25 11:42:15 crc kubenswrapper[5005]: I0225 11:42:15.077494 5005 scope.go:117] "RemoveContainer" containerID="e5ff6c4894d10dce91d86d353419478f98f2b32e9775cb58f903f0f8348bd8af" Feb 25 11:42:58 crc kubenswrapper[5005]: I0225 11:42:58.087816 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:42:58 crc kubenswrapper[5005]: I0225 11:42:58.088424 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.814617 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t9jdk"] Feb 25 11:43:04 crc kubenswrapper[5005]: E0225 11:43:04.815465 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caeeaeb7-a7f5-49f0-9fa2-2788312deefa" containerName="oc" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.815476 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="caeeaeb7-a7f5-49f0-9fa2-2788312deefa" containerName="oc" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.815623 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="caeeaeb7-a7f5-49f0-9fa2-2788312deefa" containerName="oc" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.817083 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9jdk"] Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.817182 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.889379 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-catalog-content\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.889673 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-utilities\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.889772 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msds\" (UniqueName: \"kubernetes.io/projected/d302b4f6-c778-4925-aa38-0b69865c0192-kube-api-access-8msds\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.991524 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-utilities\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.991846 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msds\" (UniqueName: \"kubernetes.io/projected/d302b4f6-c778-4925-aa38-0b69865c0192-kube-api-access-8msds\") pod 
\"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.991894 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-catalog-content\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.991957 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-utilities\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:04 crc kubenswrapper[5005]: I0225 11:43:04.992264 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-catalog-content\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:05 crc kubenswrapper[5005]: I0225 11:43:05.010102 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msds\" (UniqueName: \"kubernetes.io/projected/d302b4f6-c778-4925-aa38-0b69865c0192-kube-api-access-8msds\") pod \"redhat-marketplace-t9jdk\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:05 crc kubenswrapper[5005]: I0225 11:43:05.175569 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:05 crc kubenswrapper[5005]: I0225 11:43:05.659595 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9jdk"] Feb 25 11:43:06 crc kubenswrapper[5005]: I0225 11:43:06.564796 5005 generic.go:334] "Generic (PLEG): container finished" podID="d302b4f6-c778-4925-aa38-0b69865c0192" containerID="ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3" exitCode=0 Feb 25 11:43:06 crc kubenswrapper[5005]: I0225 11:43:06.564863 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9jdk" event={"ID":"d302b4f6-c778-4925-aa38-0b69865c0192","Type":"ContainerDied","Data":"ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3"} Feb 25 11:43:06 crc kubenswrapper[5005]: I0225 11:43:06.565336 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9jdk" event={"ID":"d302b4f6-c778-4925-aa38-0b69865c0192","Type":"ContainerStarted","Data":"e6eb5ec7b062da8de9b9d6a687f8f8759c6480d1ef9198d5a3d25d655edb73b8"} Feb 25 11:43:08 crc kubenswrapper[5005]: I0225 11:43:08.584788 5005 generic.go:334] "Generic (PLEG): container finished" podID="d302b4f6-c778-4925-aa38-0b69865c0192" containerID="a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8" exitCode=0 Feb 25 11:43:08 crc kubenswrapper[5005]: I0225 11:43:08.584903 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9jdk" event={"ID":"d302b4f6-c778-4925-aa38-0b69865c0192","Type":"ContainerDied","Data":"a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8"} Feb 25 11:43:09 crc kubenswrapper[5005]: I0225 11:43:09.597761 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9jdk" 
event={"ID":"d302b4f6-c778-4925-aa38-0b69865c0192","Type":"ContainerStarted","Data":"5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34"} Feb 25 11:43:09 crc kubenswrapper[5005]: I0225 11:43:09.621663 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t9jdk" podStartSLOduration=3.165138312 podStartE2EDuration="5.621647735s" podCreationTimestamp="2026-02-25 11:43:04 +0000 UTC" firstStartedPulling="2026-02-25 11:43:06.568093619 +0000 UTC m=+1500.608825986" lastFinishedPulling="2026-02-25 11:43:09.024603052 +0000 UTC m=+1503.065335409" observedRunningTime="2026-02-25 11:43:09.618848679 +0000 UTC m=+1503.659581006" watchObservedRunningTime="2026-02-25 11:43:09.621647735 +0000 UTC m=+1503.662380062" Feb 25 11:43:15 crc kubenswrapper[5005]: I0225 11:43:15.176714 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:15 crc kubenswrapper[5005]: I0225 11:43:15.177799 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:15 crc kubenswrapper[5005]: I0225 11:43:15.271655 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:15 crc kubenswrapper[5005]: I0225 11:43:15.720557 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:15 crc kubenswrapper[5005]: I0225 11:43:15.772503 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9jdk"] Feb 25 11:43:17 crc kubenswrapper[5005]: I0225 11:43:17.669589 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t9jdk" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="registry-server" 
containerID="cri-o://5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34" gracePeriod=2 Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.065228 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.153149 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-utilities\") pod \"d302b4f6-c778-4925-aa38-0b69865c0192\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.153648 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msds\" (UniqueName: \"kubernetes.io/projected/d302b4f6-c778-4925-aa38-0b69865c0192-kube-api-access-8msds\") pod \"d302b4f6-c778-4925-aa38-0b69865c0192\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.153836 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-catalog-content\") pod \"d302b4f6-c778-4925-aa38-0b69865c0192\" (UID: \"d302b4f6-c778-4925-aa38-0b69865c0192\") " Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.157087 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-utilities" (OuterVolumeSpecName: "utilities") pod "d302b4f6-c778-4925-aa38-0b69865c0192" (UID: "d302b4f6-c778-4925-aa38-0b69865c0192"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.162053 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d302b4f6-c778-4925-aa38-0b69865c0192-kube-api-access-8msds" (OuterVolumeSpecName: "kube-api-access-8msds") pod "d302b4f6-c778-4925-aa38-0b69865c0192" (UID: "d302b4f6-c778-4925-aa38-0b69865c0192"). InnerVolumeSpecName "kube-api-access-8msds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.181118 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d302b4f6-c778-4925-aa38-0b69865c0192" (UID: "d302b4f6-c778-4925-aa38-0b69865c0192"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.256747 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.256799 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d302b4f6-c778-4925-aa38-0b69865c0192-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.256821 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msds\" (UniqueName: \"kubernetes.io/projected/d302b4f6-c778-4925-aa38-0b69865c0192-kube-api-access-8msds\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.679945 5005 generic.go:334] "Generic (PLEG): container finished" podID="d302b4f6-c778-4925-aa38-0b69865c0192" 
containerID="5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34" exitCode=0 Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.679999 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9jdk" event={"ID":"d302b4f6-c778-4925-aa38-0b69865c0192","Type":"ContainerDied","Data":"5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34"} Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.680021 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9jdk" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.680042 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9jdk" event={"ID":"d302b4f6-c778-4925-aa38-0b69865c0192","Type":"ContainerDied","Data":"e6eb5ec7b062da8de9b9d6a687f8f8759c6480d1ef9198d5a3d25d655edb73b8"} Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.680072 5005 scope.go:117] "RemoveContainer" containerID="5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.700179 5005 scope.go:117] "RemoveContainer" containerID="a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.724149 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9jdk"] Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.727255 5005 scope.go:117] "RemoveContainer" containerID="ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.732844 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9jdk"] Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.766046 5005 scope.go:117] "RemoveContainer" containerID="5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34" Feb 25 
11:43:18 crc kubenswrapper[5005]: E0225 11:43:18.766638 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34\": container with ID starting with 5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34 not found: ID does not exist" containerID="5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.766677 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34"} err="failed to get container status \"5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34\": rpc error: code = NotFound desc = could not find container \"5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34\": container with ID starting with 5744cd086989dd6fcd3b799eb689fa8bfc3e2451fbc7d3024a5b3822f8fffd34 not found: ID does not exist" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.766702 5005 scope.go:117] "RemoveContainer" containerID="a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8" Feb 25 11:43:18 crc kubenswrapper[5005]: E0225 11:43:18.767005 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8\": container with ID starting with a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8 not found: ID does not exist" containerID="a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.767028 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8"} err="failed to get container status 
\"a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8\": rpc error: code = NotFound desc = could not find container \"a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8\": container with ID starting with a23b5b58108fb40da9558be57548637e880f5ac781ab26de1792ecf0ae2913d8 not found: ID does not exist" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.767041 5005 scope.go:117] "RemoveContainer" containerID="ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3" Feb 25 11:43:18 crc kubenswrapper[5005]: E0225 11:43:18.767452 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3\": container with ID starting with ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3 not found: ID does not exist" containerID="ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3" Feb 25 11:43:18 crc kubenswrapper[5005]: I0225 11:43:18.767484 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3"} err="failed to get container status \"ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3\": rpc error: code = NotFound desc = could not find container \"ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3\": container with ID starting with ffd39186ab588f59435d94240cdf366dc4528a2558e4c58c076a96a64597bcb3 not found: ID does not exist" Feb 25 11:43:20 crc kubenswrapper[5005]: I0225 11:43:20.695698 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" path="/var/lib/kubelet/pods/d302b4f6-c778-4925-aa38-0b69865c0192/volumes" Feb 25 11:43:28 crc kubenswrapper[5005]: I0225 11:43:28.087429 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:43:28 crc kubenswrapper[5005]: I0225 11:43:28.088101 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:43:47 crc kubenswrapper[5005]: I0225 11:43:47.983528 5005 generic.go:334] "Generic (PLEG): container finished" podID="796df0b2-b3a6-4f4f-99cb-dda670ed4411" containerID="60b53bcf84c11e3494e210a5aec6360d16f0e25342c611df553d11e582f3cea5" exitCode=0 Feb 25 11:43:47 crc kubenswrapper[5005]: I0225 11:43:47.983583 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" event={"ID":"796df0b2-b3a6-4f4f-99cb-dda670ed4411","Type":"ContainerDied","Data":"60b53bcf84c11e3494e210a5aec6360d16f0e25342c611df553d11e582f3cea5"} Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.517321 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.618757 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk5fc\" (UniqueName: \"kubernetes.io/projected/796df0b2-b3a6-4f4f-99cb-dda670ed4411-kube-api-access-mk5fc\") pod \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.619108 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-bootstrap-combined-ca-bundle\") pod \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.619152 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-inventory\") pod \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.619213 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-ssh-key-openstack-edpm-ipam\") pod \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\" (UID: \"796df0b2-b3a6-4f4f-99cb-dda670ed4411\") " Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.626215 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "796df0b2-b3a6-4f4f-99cb-dda670ed4411" (UID: "796df0b2-b3a6-4f4f-99cb-dda670ed4411"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.632533 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796df0b2-b3a6-4f4f-99cb-dda670ed4411-kube-api-access-mk5fc" (OuterVolumeSpecName: "kube-api-access-mk5fc") pod "796df0b2-b3a6-4f4f-99cb-dda670ed4411" (UID: "796df0b2-b3a6-4f4f-99cb-dda670ed4411"). InnerVolumeSpecName "kube-api-access-mk5fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.647526 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "796df0b2-b3a6-4f4f-99cb-dda670ed4411" (UID: "796df0b2-b3a6-4f4f-99cb-dda670ed4411"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.651110 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-inventory" (OuterVolumeSpecName: "inventory") pod "796df0b2-b3a6-4f4f-99cb-dda670ed4411" (UID: "796df0b2-b3a6-4f4f-99cb-dda670ed4411"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.721411 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.721442 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk5fc\" (UniqueName: \"kubernetes.io/projected/796df0b2-b3a6-4f4f-99cb-dda670ed4411-kube-api-access-mk5fc\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.721452 5005 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:49 crc kubenswrapper[5005]: I0225 11:43:49.721462 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/796df0b2-b3a6-4f4f-99cb-dda670ed4411-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.006506 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" event={"ID":"796df0b2-b3a6-4f4f-99cb-dda670ed4411","Type":"ContainerDied","Data":"b18c05be4c6af9f52e77407967f193e2327bc4d1da544d26d1abea21fdab2915"} Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.006572 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18c05be4c6af9f52e77407967f193e2327bc4d1da544d26d1abea21fdab2915" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.007045 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156029 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg"] Feb 25 11:43:50 crc kubenswrapper[5005]: E0225 11:43:50.156400 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796df0b2-b3a6-4f4f-99cb-dda670ed4411" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156417 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="796df0b2-b3a6-4f4f-99cb-dda670ed4411" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 25 11:43:50 crc kubenswrapper[5005]: E0225 11:43:50.156433 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="extract-content" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156439 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="extract-content" Feb 25 11:43:50 crc kubenswrapper[5005]: E0225 11:43:50.156467 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="extract-utilities" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156474 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="extract-utilities" Feb 25 11:43:50 crc kubenswrapper[5005]: E0225 11:43:50.156494 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="registry-server" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156500 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="registry-server" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156657 
5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d302b4f6-c778-4925-aa38-0b69865c0192" containerName="registry-server" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.156671 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="796df0b2-b3a6-4f4f-99cb-dda670ed4411" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.157253 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.160246 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.160435 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.160487 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.164365 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.190440 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg"] Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.239903 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: 
I0225 11:43:50.239983 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknr8\" (UniqueName: \"kubernetes.io/projected/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-kube-api-access-tknr8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.240019 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.341702 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.341779 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknr8\" (UniqueName: \"kubernetes.io/projected/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-kube-api-access-tknr8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.341806 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.345768 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.351920 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.362509 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknr8\" (UniqueName: \"kubernetes.io/projected/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-kube-api-access-tknr8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:50 crc kubenswrapper[5005]: I0225 11:43:50.482686 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:43:51 crc kubenswrapper[5005]: I0225 11:43:51.014984 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg"] Feb 25 11:43:52 crc kubenswrapper[5005]: I0225 11:43:52.031659 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" event={"ID":"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a","Type":"ContainerStarted","Data":"706a207c430d9e746e1b94015c889986df9389fe11a733c1b995e8dfda454bfa"} Feb 25 11:43:52 crc kubenswrapper[5005]: I0225 11:43:52.031982 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" event={"ID":"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a","Type":"ContainerStarted","Data":"c25d88360bc0e7d872d9f7d1924bbde06ec3112f3f0d96950e0483c2310804d2"} Feb 25 11:43:52 crc kubenswrapper[5005]: I0225 11:43:52.064719 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" podStartSLOduration=1.670528741 podStartE2EDuration="2.064697699s" podCreationTimestamp="2026-02-25 11:43:50 +0000 UTC" firstStartedPulling="2026-02-25 11:43:51.027936769 +0000 UTC m=+1545.068669096" lastFinishedPulling="2026-02-25 11:43:51.422105727 +0000 UTC m=+1545.462838054" observedRunningTime="2026-02-25 11:43:52.059213711 +0000 UTC m=+1546.099946048" watchObservedRunningTime="2026-02-25 11:43:52.064697699 +0000 UTC m=+1546.105430036" Feb 25 11:43:58 crc kubenswrapper[5005]: I0225 11:43:58.088109 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 25 11:43:58 crc kubenswrapper[5005]: I0225 11:43:58.088997 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:43:58 crc kubenswrapper[5005]: I0225 11:43:58.089088 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:43:58 crc kubenswrapper[5005]: I0225 11:43:58.090303 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:43:58 crc kubenswrapper[5005]: I0225 11:43:58.090473 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" gracePeriod=600 Feb 25 11:43:58 crc kubenswrapper[5005]: E0225 11:43:58.220005 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:43:59 crc kubenswrapper[5005]: I0225 11:43:59.140121 5005 
generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" exitCode=0 Feb 25 11:43:59 crc kubenswrapper[5005]: I0225 11:43:59.140213 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94"} Feb 25 11:43:59 crc kubenswrapper[5005]: I0225 11:43:59.140657 5005 scope.go:117] "RemoveContainer" containerID="c816cc274b9544be8ead9d4693ad6ccdd4ec8d5d3bca03945f606f68950d1d53" Feb 25 11:43:59 crc kubenswrapper[5005]: I0225 11:43:59.141595 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:43:59 crc kubenswrapper[5005]: E0225 11:43:59.142085 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.135849 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533664-tsb8n"] Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.137462 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.140415 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.141039 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.142328 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.181025 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533664-tsb8n"] Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.232383 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c64bz\" (UniqueName: \"kubernetes.io/projected/ceaeb004-07b3-4e93-a97e-a251ed27a076-kube-api-access-c64bz\") pod \"auto-csr-approver-29533664-tsb8n\" (UID: \"ceaeb004-07b3-4e93-a97e-a251ed27a076\") " pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.334454 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c64bz\" (UniqueName: \"kubernetes.io/projected/ceaeb004-07b3-4e93-a97e-a251ed27a076-kube-api-access-c64bz\") pod \"auto-csr-approver-29533664-tsb8n\" (UID: \"ceaeb004-07b3-4e93-a97e-a251ed27a076\") " pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.356549 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c64bz\" (UniqueName: \"kubernetes.io/projected/ceaeb004-07b3-4e93-a97e-a251ed27a076-kube-api-access-c64bz\") pod \"auto-csr-approver-29533664-tsb8n\" (UID: \"ceaeb004-07b3-4e93-a97e-a251ed27a076\") " 
pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.494117 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:00 crc kubenswrapper[5005]: I0225 11:44:00.952602 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533664-tsb8n"] Feb 25 11:44:00 crc kubenswrapper[5005]: W0225 11:44:00.954057 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceaeb004_07b3_4e93_a97e_a251ed27a076.slice/crio-d43293e7463b4e1704b2234152f16067b5fa4e2329135960d8136292f84b10d5 WatchSource:0}: Error finding container d43293e7463b4e1704b2234152f16067b5fa4e2329135960d8136292f84b10d5: Status 404 returned error can't find the container with id d43293e7463b4e1704b2234152f16067b5fa4e2329135960d8136292f84b10d5 Feb 25 11:44:01 crc kubenswrapper[5005]: I0225 11:44:01.169128 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" event={"ID":"ceaeb004-07b3-4e93-a97e-a251ed27a076","Type":"ContainerStarted","Data":"d43293e7463b4e1704b2234152f16067b5fa4e2329135960d8136292f84b10d5"} Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.785180 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gcmnt"] Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.787496 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.809890 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gcmnt"] Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.883876 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgdw4\" (UniqueName: \"kubernetes.io/projected/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-kube-api-access-sgdw4\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.884027 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-utilities\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.884120 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-catalog-content\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.985610 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-catalog-content\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.985725 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sgdw4\" (UniqueName: \"kubernetes.io/projected/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-kube-api-access-sgdw4\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.985852 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-utilities\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.986844 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-utilities\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:02 crc kubenswrapper[5005]: I0225 11:44:02.987210 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-catalog-content\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:03 crc kubenswrapper[5005]: I0225 11:44:03.017626 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgdw4\" (UniqueName: \"kubernetes.io/projected/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-kube-api-access-sgdw4\") pod \"certified-operators-gcmnt\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:03 crc kubenswrapper[5005]: I0225 11:44:03.110600 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:03 crc kubenswrapper[5005]: I0225 11:44:03.192146 5005 generic.go:334] "Generic (PLEG): container finished" podID="ceaeb004-07b3-4e93-a97e-a251ed27a076" containerID="25e6ee26bb952e6a503927614d7a12622db82da46d6d9116f7fb1941ef460671" exitCode=0 Feb 25 11:44:03 crc kubenswrapper[5005]: I0225 11:44:03.192189 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" event={"ID":"ceaeb004-07b3-4e93-a97e-a251ed27a076","Type":"ContainerDied","Data":"25e6ee26bb952e6a503927614d7a12622db82da46d6d9116f7fb1941ef460671"} Feb 25 11:44:03 crc kubenswrapper[5005]: I0225 11:44:03.625909 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gcmnt"] Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.206012 5005 generic.go:334] "Generic (PLEG): container finished" podID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerID="1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449" exitCode=0 Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.206119 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcmnt" event={"ID":"333ed5c9-4ca6-4d85-b827-a3658b2f6d36","Type":"ContainerDied","Data":"1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449"} Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.206190 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcmnt" event={"ID":"333ed5c9-4ca6-4d85-b827-a3658b2f6d36","Type":"ContainerStarted","Data":"e7a55261432fdc648e7e1e99245705c8a4c63e44f38131451a74b228007ac22e"} Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.597679 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.719353 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c64bz\" (UniqueName: \"kubernetes.io/projected/ceaeb004-07b3-4e93-a97e-a251ed27a076-kube-api-access-c64bz\") pod \"ceaeb004-07b3-4e93-a97e-a251ed27a076\" (UID: \"ceaeb004-07b3-4e93-a97e-a251ed27a076\") " Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.727092 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceaeb004-07b3-4e93-a97e-a251ed27a076-kube-api-access-c64bz" (OuterVolumeSpecName: "kube-api-access-c64bz") pod "ceaeb004-07b3-4e93-a97e-a251ed27a076" (UID: "ceaeb004-07b3-4e93-a97e-a251ed27a076"). InnerVolumeSpecName "kube-api-access-c64bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:44:04 crc kubenswrapper[5005]: I0225 11:44:04.821000 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c64bz\" (UniqueName: \"kubernetes.io/projected/ceaeb004-07b3-4e93-a97e-a251ed27a076-kube-api-access-c64bz\") on node \"crc\" DevicePath \"\"" Feb 25 11:44:05 crc kubenswrapper[5005]: I0225 11:44:05.219947 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" event={"ID":"ceaeb004-07b3-4e93-a97e-a251ed27a076","Type":"ContainerDied","Data":"d43293e7463b4e1704b2234152f16067b5fa4e2329135960d8136292f84b10d5"} Feb 25 11:44:05 crc kubenswrapper[5005]: I0225 11:44:05.220344 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43293e7463b4e1704b2234152f16067b5fa4e2329135960d8136292f84b10d5" Feb 25 11:44:05 crc kubenswrapper[5005]: I0225 11:44:05.219976 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533664-tsb8n" Feb 25 11:44:05 crc kubenswrapper[5005]: I0225 11:44:05.696598 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533658-khbfz"] Feb 25 11:44:05 crc kubenswrapper[5005]: I0225 11:44:05.708924 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533658-khbfz"] Feb 25 11:44:06 crc kubenswrapper[5005]: I0225 11:44:06.236279 5005 generic.go:334] "Generic (PLEG): container finished" podID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerID="a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114" exitCode=0 Feb 25 11:44:06 crc kubenswrapper[5005]: I0225 11:44:06.236323 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcmnt" event={"ID":"333ed5c9-4ca6-4d85-b827-a3658b2f6d36","Type":"ContainerDied","Data":"a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114"} Feb 25 11:44:06 crc kubenswrapper[5005]: I0225 11:44:06.707873 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df09937-0397-4bf9-8f3a-435b47224f7e" path="/var/lib/kubelet/pods/3df09937-0397-4bf9-8f3a-435b47224f7e/volumes" Feb 25 11:44:07 crc kubenswrapper[5005]: I0225 11:44:07.246501 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcmnt" event={"ID":"333ed5c9-4ca6-4d85-b827-a3658b2f6d36","Type":"ContainerStarted","Data":"b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6"} Feb 25 11:44:07 crc kubenswrapper[5005]: I0225 11:44:07.267160 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gcmnt" podStartSLOduration=2.803935018 podStartE2EDuration="5.267144926s" podCreationTimestamp="2026-02-25 11:44:02 +0000 UTC" firstStartedPulling="2026-02-25 11:44:04.21129996 +0000 UTC m=+1558.252032317" lastFinishedPulling="2026-02-25 
11:44:06.674509888 +0000 UTC m=+1560.715242225" observedRunningTime="2026-02-25 11:44:07.260810031 +0000 UTC m=+1561.301542358" watchObservedRunningTime="2026-02-25 11:44:07.267144926 +0000 UTC m=+1561.307877253" Feb 25 11:44:11 crc kubenswrapper[5005]: I0225 11:44:11.685718 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:44:11 crc kubenswrapper[5005]: E0225 11:44:11.686773 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:44:13 crc kubenswrapper[5005]: I0225 11:44:13.111256 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:13 crc kubenswrapper[5005]: I0225 11:44:13.111614 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:13 crc kubenswrapper[5005]: I0225 11:44:13.165479 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:13 crc kubenswrapper[5005]: I0225 11:44:13.363322 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:13 crc kubenswrapper[5005]: I0225 11:44:13.416491 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gcmnt"] Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.228058 5005 scope.go:117] "RemoveContainer" containerID="909fdd9ef9f189ddcc58320f4ff35609076afe4d8c7fc5800ec148320d07a934" 
Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.309355 5005 scope.go:117] "RemoveContainer" containerID="349bb79dd4d251f0509c62a4ed8d2a2b8477ffa133040216112b2441f98e7030" Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.335247 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gcmnt" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="registry-server" containerID="cri-o://b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6" gracePeriod=2 Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.832408 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.943487 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-catalog-content\") pod \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.943530 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-utilities\") pod \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.943662 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgdw4\" (UniqueName: \"kubernetes.io/projected/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-kube-api-access-sgdw4\") pod \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\" (UID: \"333ed5c9-4ca6-4d85-b827-a3658b2f6d36\") " Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.946060 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-utilities" (OuterVolumeSpecName: "utilities") pod "333ed5c9-4ca6-4d85-b827-a3658b2f6d36" (UID: "333ed5c9-4ca6-4d85-b827-a3658b2f6d36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:44:15 crc kubenswrapper[5005]: I0225 11:44:15.950726 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-kube-api-access-sgdw4" (OuterVolumeSpecName: "kube-api-access-sgdw4") pod "333ed5c9-4ca6-4d85-b827-a3658b2f6d36" (UID: "333ed5c9-4ca6-4d85-b827-a3658b2f6d36"). InnerVolumeSpecName "kube-api-access-sgdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.045634 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.045668 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgdw4\" (UniqueName: \"kubernetes.io/projected/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-kube-api-access-sgdw4\") on node \"crc\" DevicePath \"\"" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.181088 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "333ed5c9-4ca6-4d85-b827-a3658b2f6d36" (UID: "333ed5c9-4ca6-4d85-b827-a3658b2f6d36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.249537 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/333ed5c9-4ca6-4d85-b827-a3658b2f6d36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.356309 5005 generic.go:334] "Generic (PLEG): container finished" podID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerID="b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6" exitCode=0 Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.356355 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gcmnt" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.356426 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcmnt" event={"ID":"333ed5c9-4ca6-4d85-b827-a3658b2f6d36","Type":"ContainerDied","Data":"b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6"} Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.356489 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gcmnt" event={"ID":"333ed5c9-4ca6-4d85-b827-a3658b2f6d36","Type":"ContainerDied","Data":"e7a55261432fdc648e7e1e99245705c8a4c63e44f38131451a74b228007ac22e"} Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.356528 5005 scope.go:117] "RemoveContainer" containerID="b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.393136 5005 scope.go:117] "RemoveContainer" containerID="a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.401861 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gcmnt"] Feb 25 11:44:16 crc kubenswrapper[5005]: 
I0225 11:44:16.412646 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gcmnt"] Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.427556 5005 scope.go:117] "RemoveContainer" containerID="1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.464987 5005 scope.go:117] "RemoveContainer" containerID="b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6" Feb 25 11:44:16 crc kubenswrapper[5005]: E0225 11:44:16.465505 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6\": container with ID starting with b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6 not found: ID does not exist" containerID="b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.465544 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6"} err="failed to get container status \"b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6\": rpc error: code = NotFound desc = could not find container \"b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6\": container with ID starting with b7a0bada7f7a41d9152501bfefd9439e64aba61b347f4719761615b5830e91c6 not found: ID does not exist" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.465563 5005 scope.go:117] "RemoveContainer" containerID="a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114" Feb 25 11:44:16 crc kubenswrapper[5005]: E0225 11:44:16.465883 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114\": container 
with ID starting with a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114 not found: ID does not exist" containerID="a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.465931 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114"} err="failed to get container status \"a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114\": rpc error: code = NotFound desc = could not find container \"a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114\": container with ID starting with a98e1225aab02a57dc3abdf3fe2998ebabd83271d5a5b356361da83e1c97b114 not found: ID does not exist" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.465965 5005 scope.go:117] "RemoveContainer" containerID="1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449" Feb 25 11:44:16 crc kubenswrapper[5005]: E0225 11:44:16.466310 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449\": container with ID starting with 1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449 not found: ID does not exist" containerID="1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.466336 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449"} err="failed to get container status \"1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449\": rpc error: code = NotFound desc = could not find container \"1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449\": container with ID starting with 1be7e43351345ca7e7ef6eb2af948a279a3f58c4bfdc3147982d6c0295616449 not 
found: ID does not exist" Feb 25 11:44:16 crc kubenswrapper[5005]: I0225 11:44:16.699060 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" path="/var/lib/kubelet/pods/333ed5c9-4ca6-4d85-b827-a3658b2f6d36/volumes" Feb 25 11:44:25 crc kubenswrapper[5005]: I0225 11:44:25.686107 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:44:25 crc kubenswrapper[5005]: E0225 11:44:25.686873 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:44:39 crc kubenswrapper[5005]: I0225 11:44:39.686245 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:44:39 crc kubenswrapper[5005]: E0225 11:44:39.687033 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:44:51 crc kubenswrapper[5005]: I0225 11:44:51.686206 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:44:51 crc kubenswrapper[5005]: E0225 11:44:51.687205 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:44:57 crc kubenswrapper[5005]: I0225 11:44:57.055695 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1873-account-create-update-6jhct"] Feb 25 11:44:57 crc kubenswrapper[5005]: I0225 11:44:57.074640 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1873-account-create-update-6jhct"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.046758 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kdhsw"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.059120 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pgwr8"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.069527 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6acf-account-create-update-jxgrk"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.080901 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kdhsw"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.091871 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pgwr8"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.101586 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6acf-account-create-update-jxgrk"] Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.703534 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09419be8-2dd4-4b0d-a830-e59c81a5a02c" path="/var/lib/kubelet/pods/09419be8-2dd4-4b0d-a830-e59c81a5a02c/volumes" Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.704758 5005 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2f5eb2d0-058d-4b0e-80bc-5e2f919b4985" path="/var/lib/kubelet/pods/2f5eb2d0-058d-4b0e-80bc-5e2f919b4985/volumes" Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.706063 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55244bcb-677d-4120-9e64-a52a075d96b8" path="/var/lib/kubelet/pods/55244bcb-677d-4120-9e64-a52a075d96b8/volumes" Feb 25 11:44:58 crc kubenswrapper[5005]: I0225 11:44:58.707136 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e276afbc-14ea-40fc-85b8-1c94ee29e8f6" path="/var/lib/kubelet/pods/e276afbc-14ea-40fc-85b8-1c94ee29e8f6/volumes" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.152608 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp"] Feb 25 11:45:00 crc kubenswrapper[5005]: E0225 11:45:00.154042 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="registry-server" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.154068 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="registry-server" Feb 25 11:45:00 crc kubenswrapper[5005]: E0225 11:45:00.154099 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="extract-content" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.154110 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="extract-content" Feb 25 11:45:00 crc kubenswrapper[5005]: E0225 11:45:00.154131 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceaeb004-07b3-4e93-a97e-a251ed27a076" containerName="oc" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.154142 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceaeb004-07b3-4e93-a97e-a251ed27a076" 
containerName="oc" Feb 25 11:45:00 crc kubenswrapper[5005]: E0225 11:45:00.154168 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="extract-utilities" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.154176 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="extract-utilities" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.154472 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceaeb004-07b3-4e93-a97e-a251ed27a076" containerName="oc" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.154509 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="333ed5c9-4ca6-4d85-b827-a3658b2f6d36" containerName="registry-server" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.155421 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.158244 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.159383 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.175318 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp"] Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.247920 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht9v\" (UniqueName: \"kubernetes.io/projected/e97869f9-ad44-4d89-809d-45be0594ebd2-kube-api-access-2ht9v\") pod \"collect-profiles-29533665-tgqfp\" (UID: 
\"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.247984 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e97869f9-ad44-4d89-809d-45be0594ebd2-config-volume\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.248345 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e97869f9-ad44-4d89-809d-45be0594ebd2-secret-volume\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.350057 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e97869f9-ad44-4d89-809d-45be0594ebd2-secret-volume\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.350162 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ht9v\" (UniqueName: \"kubernetes.io/projected/e97869f9-ad44-4d89-809d-45be0594ebd2-kube-api-access-2ht9v\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.350204 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/e97869f9-ad44-4d89-809d-45be0594ebd2-config-volume\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.350979 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e97869f9-ad44-4d89-809d-45be0594ebd2-config-volume\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.355622 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e97869f9-ad44-4d89-809d-45be0594ebd2-secret-volume\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.364753 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ht9v\" (UniqueName: \"kubernetes.io/projected/e97869f9-ad44-4d89-809d-45be0594ebd2-kube-api-access-2ht9v\") pod \"collect-profiles-29533665-tgqfp\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.475109 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:00 crc kubenswrapper[5005]: I0225 11:45:00.928476 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp"] Feb 25 11:45:01 crc kubenswrapper[5005]: I0225 11:45:01.051686 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7a71-account-create-update-cmjms"] Feb 25 11:45:01 crc kubenswrapper[5005]: I0225 11:45:01.061965 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7a71-account-create-update-cmjms"] Feb 25 11:45:01 crc kubenswrapper[5005]: I0225 11:45:01.873582 5005 generic.go:334] "Generic (PLEG): container finished" podID="e97869f9-ad44-4d89-809d-45be0594ebd2" containerID="ee3d4566c7a389e10f857ce4e23156b3ecdbcc37942b82016599207325df9035" exitCode=0 Feb 25 11:45:01 crc kubenswrapper[5005]: I0225 11:45:01.873695 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" event={"ID":"e97869f9-ad44-4d89-809d-45be0594ebd2","Type":"ContainerDied","Data":"ee3d4566c7a389e10f857ce4e23156b3ecdbcc37942b82016599207325df9035"} Feb 25 11:45:01 crc kubenswrapper[5005]: I0225 11:45:01.873903 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" event={"ID":"e97869f9-ad44-4d89-809d-45be0594ebd2","Type":"ContainerStarted","Data":"d546389daa686aa5488a865de4fa07781265b9ec93310d1a089155a601e62423"} Feb 25 11:45:02 crc kubenswrapper[5005]: I0225 11:45:02.029191 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7bd7z"] Feb 25 11:45:02 crc kubenswrapper[5005]: I0225 11:45:02.036458 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7bd7z"] Feb 25 11:45:02 crc kubenswrapper[5005]: I0225 11:45:02.685458 5005 scope.go:117] 
"RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:45:02 crc kubenswrapper[5005]: E0225 11:45:02.685836 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:45:02 crc kubenswrapper[5005]: I0225 11:45:02.697237 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7" path="/var/lib/kubelet/pods/b4a8b7c8-c7dc-42ad-8811-977d6f50f3d7/volumes" Feb 25 11:45:02 crc kubenswrapper[5005]: I0225 11:45:02.697829 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c677f6a5-2bfc-4007-aed6-c065c62b582d" path="/var/lib/kubelet/pods/c677f6a5-2bfc-4007-aed6-c065c62b582d/volumes" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.335540 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.502844 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ht9v\" (UniqueName: \"kubernetes.io/projected/e97869f9-ad44-4d89-809d-45be0594ebd2-kube-api-access-2ht9v\") pod \"e97869f9-ad44-4d89-809d-45be0594ebd2\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.503130 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e97869f9-ad44-4d89-809d-45be0594ebd2-config-volume\") pod \"e97869f9-ad44-4d89-809d-45be0594ebd2\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.503202 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e97869f9-ad44-4d89-809d-45be0594ebd2-secret-volume\") pod \"e97869f9-ad44-4d89-809d-45be0594ebd2\" (UID: \"e97869f9-ad44-4d89-809d-45be0594ebd2\") " Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.504085 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e97869f9-ad44-4d89-809d-45be0594ebd2-config-volume" (OuterVolumeSpecName: "config-volume") pod "e97869f9-ad44-4d89-809d-45be0594ebd2" (UID: "e97869f9-ad44-4d89-809d-45be0594ebd2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.508742 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97869f9-ad44-4d89-809d-45be0594ebd2-kube-api-access-2ht9v" (OuterVolumeSpecName: "kube-api-access-2ht9v") pod "e97869f9-ad44-4d89-809d-45be0594ebd2" (UID: "e97869f9-ad44-4d89-809d-45be0594ebd2"). 
InnerVolumeSpecName "kube-api-access-2ht9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.508786 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97869f9-ad44-4d89-809d-45be0594ebd2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e97869f9-ad44-4d89-809d-45be0594ebd2" (UID: "e97869f9-ad44-4d89-809d-45be0594ebd2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.605961 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ht9v\" (UniqueName: \"kubernetes.io/projected/e97869f9-ad44-4d89-809d-45be0594ebd2-kube-api-access-2ht9v\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.606011 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e97869f9-ad44-4d89-809d-45be0594ebd2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.606031 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e97869f9-ad44-4d89-809d-45be0594ebd2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.893552 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" event={"ID":"e97869f9-ad44-4d89-809d-45be0594ebd2","Type":"ContainerDied","Data":"d546389daa686aa5488a865de4fa07781265b9ec93310d1a089155a601e62423"} Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.893590 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d546389daa686aa5488a865de4fa07781265b9ec93310d1a089155a601e62423" Feb 25 11:45:03 crc kubenswrapper[5005]: I0225 11:45:03.893635 5005 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.435221 5005 scope.go:117] "RemoveContainer" containerID="368e9872489a5e95a0c84300b654061df60ad2cbc71ef0fe767f3065c93bd780" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.470992 5005 scope.go:117] "RemoveContainer" containerID="75cbe7f0936212e1bffe2d28ea3d9f007ecc44c963752360cb7162d4aba80823" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.512480 5005 scope.go:117] "RemoveContainer" containerID="93df242d3df6401dcc3fa87a4bf9762771934edad27050fde7371cd41e024e89" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.553213 5005 scope.go:117] "RemoveContainer" containerID="ee2a0311e7e2de7640df3181dad75a64754ec9167f7d4c894b23a0bd9c748d29" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.599216 5005 scope.go:117] "RemoveContainer" containerID="e9c775f852281bdc6bb8b5cbf58776f8307b2680e0ece8466c24b636d1632fbe" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.644041 5005 scope.go:117] "RemoveContainer" containerID="b2eb9a440be25a4de790806ba4806867f4c9b22b04f12f8d951248290608dbda" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.677423 5005 scope.go:117] "RemoveContainer" containerID="037a174a553ca0e3c71f0a9242b078d596495feb895a62bac6462bacbd9b6133" Feb 25 11:45:15 crc kubenswrapper[5005]: I0225 11:45:15.710898 5005 scope.go:117] "RemoveContainer" containerID="d29d6f434381be09314967ee5f8cd4bf28986e657be8b78ff14f79551573feba" Feb 25 11:45:16 crc kubenswrapper[5005]: I0225 11:45:16.692167 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:45:16 crc kubenswrapper[5005]: E0225 11:45:16.692656 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:45:17 crc kubenswrapper[5005]: I0225 11:45:17.033255 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b824f"] Feb 25 11:45:17 crc kubenswrapper[5005]: I0225 11:45:17.041357 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b824f"] Feb 25 11:45:18 crc kubenswrapper[5005]: I0225 11:45:18.697309 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16ead21-e865-48ed-8adc-fa1892f6a38f" path="/var/lib/kubelet/pods/e16ead21-e865-48ed-8adc-fa1892f6a38f/volumes" Feb 25 11:45:21 crc kubenswrapper[5005]: I0225 11:45:21.054118 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8k7gx"] Feb 25 11:45:21 crc kubenswrapper[5005]: I0225 11:45:21.069661 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8k7gx"] Feb 25 11:45:22 crc kubenswrapper[5005]: I0225 11:45:22.705180 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4" path="/var/lib/kubelet/pods/a6fdb7b6-9eca-4adc-a5d2-3aee73085ea4/volumes" Feb 25 11:45:29 crc kubenswrapper[5005]: I0225 11:45:29.685721 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:45:29 crc kubenswrapper[5005]: E0225 11:45:29.686360 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:45:33 crc kubenswrapper[5005]: I0225 11:45:33.047019 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-605b-account-create-update-dpg5x"] Feb 25 11:45:33 crc kubenswrapper[5005]: I0225 11:45:33.058090 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kw2jf"] Feb 25 11:45:33 crc kubenswrapper[5005]: I0225 11:45:33.072093 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kw2jf"] Feb 25 11:45:33 crc kubenswrapper[5005]: I0225 11:45:33.083963 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-605b-account-create-update-dpg5x"] Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 11:45:34.036057 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xv4rr"] Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 11:45:34.051659 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xv4rr"] Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 11:45:34.219817 5005 generic.go:334] "Generic (PLEG): container finished" podID="291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" containerID="706a207c430d9e746e1b94015c889986df9389fe11a733c1b995e8dfda454bfa" exitCode=0 Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 11:45:34.219881 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" event={"ID":"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a","Type":"ContainerDied","Data":"706a207c430d9e746e1b94015c889986df9389fe11a733c1b995e8dfda454bfa"} Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 11:45:34.703749 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0637913d-b2d2-4492-8aa2-eba57f5e7177" path="/var/lib/kubelet/pods/0637913d-b2d2-4492-8aa2-eba57f5e7177/volumes" Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 
11:45:34.705712 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14057729-fc6f-46d3-ba9f-606be9cb3e28" path="/var/lib/kubelet/pods/14057729-fc6f-46d3-ba9f-606be9cb3e28/volumes" Feb 25 11:45:34 crc kubenswrapper[5005]: I0225 11:45:34.706865 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4bd119-b8db-4ea6-92e1-efca3d60f766" path="/var/lib/kubelet/pods/ef4bd119-b8db-4ea6-92e1-efca3d60f766/volumes" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.761553 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.860628 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknr8\" (UniqueName: \"kubernetes.io/projected/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-kube-api-access-tknr8\") pod \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.860705 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-inventory\") pod \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.860757 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-ssh-key-openstack-edpm-ipam\") pod \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\" (UID: \"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a\") " Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.866078 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-kube-api-access-tknr8" 
(OuterVolumeSpecName: "kube-api-access-tknr8") pod "291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" (UID: "291ad3a8-272b-4a32-b8bd-0d2b7fcb546a"). InnerVolumeSpecName "kube-api-access-tknr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.891250 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" (UID: "291ad3a8-272b-4a32-b8bd-0d2b7fcb546a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.902464 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-inventory" (OuterVolumeSpecName: "inventory") pod "291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" (UID: "291ad3a8-272b-4a32-b8bd-0d2b7fcb546a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.962819 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknr8\" (UniqueName: \"kubernetes.io/projected/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-kube-api-access-tknr8\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.962867 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:35 crc kubenswrapper[5005]: I0225 11:45:35.962886 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.238775 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" event={"ID":"291ad3a8-272b-4a32-b8bd-0d2b7fcb546a","Type":"ContainerDied","Data":"c25d88360bc0e7d872d9f7d1924bbde06ec3112f3f0d96950e0483c2310804d2"} Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.238840 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25d88360bc0e7d872d9f7d1924bbde06ec3112f3f0d96950e0483c2310804d2" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.238896 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.340567 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k"] Feb 25 11:45:36 crc kubenswrapper[5005]: E0225 11:45:36.341073 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.341097 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:45:36 crc kubenswrapper[5005]: E0225 11:45:36.341117 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97869f9-ad44-4d89-809d-45be0594ebd2" containerName="collect-profiles" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.341129 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97869f9-ad44-4d89-809d-45be0594ebd2" containerName="collect-profiles" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.341412 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97869f9-ad44-4d89-809d-45be0594ebd2" containerName="collect-profiles" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.341437 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.342143 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.345840 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.346078 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.346120 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.346342 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.371580 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k"] Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.472433 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.472687 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: 
I0225 11:45:36.472771 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ldc\" (UniqueName: \"kubernetes.io/projected/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-kube-api-access-85ldc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.575107 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.575285 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.575366 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85ldc\" (UniqueName: \"kubernetes.io/projected/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-kube-api-access-85ldc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.580620 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.581143 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.607134 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85ldc\" (UniqueName: \"kubernetes.io/projected/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-kube-api-access-85ldc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g496k\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:36 crc kubenswrapper[5005]: I0225 11:45:36.668826 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.045575 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jt9t4"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.056089 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jt9t4"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.063935 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1fc9-account-create-update-fhrgw"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.071807 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1fc9-account-create-update-fhrgw"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.079544 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-495c-account-create-update-zshp6"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.099265 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-495c-account-create-update-zshp6"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.125943 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k"] Feb 25 11:45:37 crc kubenswrapper[5005]: I0225 11:45:37.256846 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" event={"ID":"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5","Type":"ContainerStarted","Data":"dbeae9cece456d419f4f317b5b7d30aab572c8412c41ffd3ac2b3b7d57ee491c"} Feb 25 11:45:38 crc kubenswrapper[5005]: I0225 11:45:38.278277 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" 
event={"ID":"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5","Type":"ContainerStarted","Data":"b7b1a062f9bb4eead6d8cfdb1b468e7c66460df30a1d8fcf642bca78393b58f1"} Feb 25 11:45:38 crc kubenswrapper[5005]: I0225 11:45:38.300664 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" podStartSLOduration=1.868469618 podStartE2EDuration="2.300645222s" podCreationTimestamp="2026-02-25 11:45:36 +0000 UTC" firstStartedPulling="2026-02-25 11:45:37.128084208 +0000 UTC m=+1651.168816525" lastFinishedPulling="2026-02-25 11:45:37.560259812 +0000 UTC m=+1651.600992129" observedRunningTime="2026-02-25 11:45:38.295026479 +0000 UTC m=+1652.335758846" watchObservedRunningTime="2026-02-25 11:45:38.300645222 +0000 UTC m=+1652.341377549" Feb 25 11:45:38 crc kubenswrapper[5005]: I0225 11:45:38.699175 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d7b8d7-c3a9-4065-8dd8-5c01a37f6566" path="/var/lib/kubelet/pods/41d7b8d7-c3a9-4065-8dd8-5c01a37f6566/volumes" Feb 25 11:45:38 crc kubenswrapper[5005]: I0225 11:45:38.700596 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bceed99-c54d-44a7-b7f0-85183b242006" path="/var/lib/kubelet/pods/9bceed99-c54d-44a7-b7f0-85183b242006/volumes" Feb 25 11:45:38 crc kubenswrapper[5005]: I0225 11:45:38.701703 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c924c62-341b-43ea-af0b-ac567b5acfd0" path="/var/lib/kubelet/pods/9c924c62-341b-43ea-af0b-ac567b5acfd0/volumes" Feb 25 11:45:42 crc kubenswrapper[5005]: I0225 11:45:42.035526 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dp75q"] Feb 25 11:45:42 crc kubenswrapper[5005]: I0225 11:45:42.049140 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dp75q"] Feb 25 11:45:42 crc kubenswrapper[5005]: I0225 11:45:42.694077 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="50d0cec3-9a4a-4252-accb-dc4194ad752e" path="/var/lib/kubelet/pods/50d0cec3-9a4a-4252-accb-dc4194ad752e/volumes" Feb 25 11:45:43 crc kubenswrapper[5005]: I0225 11:45:43.686326 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:45:43 crc kubenswrapper[5005]: E0225 11:45:43.687046 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:45:44 crc kubenswrapper[5005]: I0225 11:45:44.331498 5005 generic.go:334] "Generic (PLEG): container finished" podID="1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" containerID="b7b1a062f9bb4eead6d8cfdb1b468e7c66460df30a1d8fcf642bca78393b58f1" exitCode=0 Feb 25 11:45:44 crc kubenswrapper[5005]: I0225 11:45:44.331538 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" event={"ID":"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5","Type":"ContainerDied","Data":"b7b1a062f9bb4eead6d8cfdb1b468e7c66460df30a1d8fcf642bca78393b58f1"} Feb 25 11:45:45 crc kubenswrapper[5005]: I0225 11:45:45.837671 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:45 crc kubenswrapper[5005]: I0225 11:45:45.954604 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-inventory\") pod \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " Feb 25 11:45:45 crc kubenswrapper[5005]: I0225 11:45:45.955075 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-ssh-key-openstack-edpm-ipam\") pod \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " Feb 25 11:45:45 crc kubenswrapper[5005]: I0225 11:45:45.955283 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85ldc\" (UniqueName: \"kubernetes.io/projected/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-kube-api-access-85ldc\") pod \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\" (UID: \"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5\") " Feb 25 11:45:45 crc kubenswrapper[5005]: I0225 11:45:45.962599 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-kube-api-access-85ldc" (OuterVolumeSpecName: "kube-api-access-85ldc") pod "1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" (UID: "1269c8b0-7511-4b9d-b8bd-f64a044ff8b5"). InnerVolumeSpecName "kube-api-access-85ldc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.000667 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" (UID: "1269c8b0-7511-4b9d-b8bd-f64a044ff8b5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.007943 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-inventory" (OuterVolumeSpecName: "inventory") pod "1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" (UID: "1269c8b0-7511-4b9d-b8bd-f64a044ff8b5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.061365 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.061484 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.061498 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85ldc\" (UniqueName: \"kubernetes.io/projected/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5-kube-api-access-85ldc\") on node \"crc\" DevicePath \"\"" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.354825 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" 
event={"ID":"1269c8b0-7511-4b9d-b8bd-f64a044ff8b5","Type":"ContainerDied","Data":"dbeae9cece456d419f4f317b5b7d30aab572c8412c41ffd3ac2b3b7d57ee491c"} Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.354890 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbeae9cece456d419f4f317b5b7d30aab572c8412c41ffd3ac2b3b7d57ee491c" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.354983 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.498489 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq"] Feb 25 11:45:46 crc kubenswrapper[5005]: E0225 11:45:46.498872 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.498888 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.499056 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.499646 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.502007 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.502237 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.502591 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.502652 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.554982 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq"] Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.672748 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cv9p\" (UniqueName: \"kubernetes.io/projected/612d0145-d4b9-43a5-a0c1-00fe6be630a4-kube-api-access-6cv9p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.672811 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.672919 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.774417 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cv9p\" (UniqueName: \"kubernetes.io/projected/612d0145-d4b9-43a5-a0c1-00fe6be630a4-kube-api-access-6cv9p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.774479 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.774606 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.778987 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.779350 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.791954 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cv9p\" (UniqueName: \"kubernetes.io/projected/612d0145-d4b9-43a5-a0c1-00fe6be630a4-kube-api-access-6cv9p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hrbkq\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:46 crc kubenswrapper[5005]: I0225 11:45:46.820985 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:45:47 crc kubenswrapper[5005]: I0225 11:45:47.332905 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq"] Feb 25 11:45:47 crc kubenswrapper[5005]: I0225 11:45:47.362717 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" event={"ID":"612d0145-d4b9-43a5-a0c1-00fe6be630a4","Type":"ContainerStarted","Data":"1d0ec24c74567379b3eed1e9e93a617fb9a04bc6f895c455435e55d930c63326"} Feb 25 11:45:48 crc kubenswrapper[5005]: I0225 11:45:48.372040 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" event={"ID":"612d0145-d4b9-43a5-a0c1-00fe6be630a4","Type":"ContainerStarted","Data":"3357acec55340978d55816892785f909266652d5d4590221a461f9127123769a"} Feb 25 11:45:48 crc kubenswrapper[5005]: I0225 11:45:48.392227 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" podStartSLOduration=2.007808125 podStartE2EDuration="2.392210872s" podCreationTimestamp="2026-02-25 11:45:46 +0000 UTC" firstStartedPulling="2026-02-25 11:45:47.344225827 +0000 UTC m=+1661.384958154" lastFinishedPulling="2026-02-25 11:45:47.728628574 +0000 UTC m=+1661.769360901" observedRunningTime="2026-02-25 11:45:48.384828095 +0000 UTC m=+1662.425560412" watchObservedRunningTime="2026-02-25 11:45:48.392210872 +0000 UTC m=+1662.432943199" Feb 25 11:45:57 crc kubenswrapper[5005]: I0225 11:45:57.685112 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:45:57 crc kubenswrapper[5005]: E0225 11:45:57.685876 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.152298 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533666-btzqv"] Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.153960 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.156974 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.157070 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.159002 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.164657 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533666-btzqv"] Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.244242 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntq9r\" (UniqueName: \"kubernetes.io/projected/da2ade9b-1140-4c1f-82b4-362f0d96792f-kube-api-access-ntq9r\") pod \"auto-csr-approver-29533666-btzqv\" (UID: \"da2ade9b-1140-4c1f-82b4-362f0d96792f\") " pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.346144 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntq9r\" (UniqueName: 
\"kubernetes.io/projected/da2ade9b-1140-4c1f-82b4-362f0d96792f-kube-api-access-ntq9r\") pod \"auto-csr-approver-29533666-btzqv\" (UID: \"da2ade9b-1140-4c1f-82b4-362f0d96792f\") " pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.370809 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntq9r\" (UniqueName: \"kubernetes.io/projected/da2ade9b-1140-4c1f-82b4-362f0d96792f-kube-api-access-ntq9r\") pod \"auto-csr-approver-29533666-btzqv\" (UID: \"da2ade9b-1140-4c1f-82b4-362f0d96792f\") " pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.474932 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:00 crc kubenswrapper[5005]: I0225 11:46:00.960535 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533666-btzqv"] Feb 25 11:46:01 crc kubenswrapper[5005]: I0225 11:46:01.496692 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533666-btzqv" event={"ID":"da2ade9b-1140-4c1f-82b4-362f0d96792f","Type":"ContainerStarted","Data":"2e07b4c5241501a645ed75a15b30ee041ceb019b4a2db19277784072905b66e3"} Feb 25 11:46:02 crc kubenswrapper[5005]: I0225 11:46:02.504340 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533666-btzqv" event={"ID":"da2ade9b-1140-4c1f-82b4-362f0d96792f","Type":"ContainerStarted","Data":"33310c3065fedd0cb9a613726fb6541047c25d9729e374ce20d02440df2c524b"} Feb 25 11:46:02 crc kubenswrapper[5005]: I0225 11:46:02.525307 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533666-btzqv" podStartSLOduration=1.387608736 podStartE2EDuration="2.525285778s" podCreationTimestamp="2026-02-25 11:46:00 +0000 UTC" 
firstStartedPulling="2026-02-25 11:46:00.966096719 +0000 UTC m=+1675.006829046" lastFinishedPulling="2026-02-25 11:46:02.103773761 +0000 UTC m=+1676.144506088" observedRunningTime="2026-02-25 11:46:02.520023457 +0000 UTC m=+1676.560755784" watchObservedRunningTime="2026-02-25 11:46:02.525285778 +0000 UTC m=+1676.566018115" Feb 25 11:46:03 crc kubenswrapper[5005]: I0225 11:46:03.518337 5005 generic.go:334] "Generic (PLEG): container finished" podID="da2ade9b-1140-4c1f-82b4-362f0d96792f" containerID="33310c3065fedd0cb9a613726fb6541047c25d9729e374ce20d02440df2c524b" exitCode=0 Feb 25 11:46:03 crc kubenswrapper[5005]: I0225 11:46:03.518426 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533666-btzqv" event={"ID":"da2ade9b-1140-4c1f-82b4-362f0d96792f","Type":"ContainerDied","Data":"33310c3065fedd0cb9a613726fb6541047c25d9729e374ce20d02440df2c524b"} Feb 25 11:46:04 crc kubenswrapper[5005]: I0225 11:46:04.945548 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.039584 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntq9r\" (UniqueName: \"kubernetes.io/projected/da2ade9b-1140-4c1f-82b4-362f0d96792f-kube-api-access-ntq9r\") pod \"da2ade9b-1140-4c1f-82b4-362f0d96792f\" (UID: \"da2ade9b-1140-4c1f-82b4-362f0d96792f\") " Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.057582 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2ade9b-1140-4c1f-82b4-362f0d96792f-kube-api-access-ntq9r" (OuterVolumeSpecName: "kube-api-access-ntq9r") pod "da2ade9b-1140-4c1f-82b4-362f0d96792f" (UID: "da2ade9b-1140-4c1f-82b4-362f0d96792f"). InnerVolumeSpecName "kube-api-access-ntq9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.141927 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntq9r\" (UniqueName: \"kubernetes.io/projected/da2ade9b-1140-4c1f-82b4-362f0d96792f-kube-api-access-ntq9r\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.538751 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533666-btzqv" event={"ID":"da2ade9b-1140-4c1f-82b4-362f0d96792f","Type":"ContainerDied","Data":"2e07b4c5241501a645ed75a15b30ee041ceb019b4a2db19277784072905b66e3"} Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.538796 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e07b4c5241501a645ed75a15b30ee041ceb019b4a2db19277784072905b66e3" Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.538794 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533666-btzqv" Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.587748 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533660-sz59c"] Feb 25 11:46:05 crc kubenswrapper[5005]: I0225 11:46:05.595040 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533660-sz59c"] Feb 25 11:46:06 crc kubenswrapper[5005]: I0225 11:46:06.719396 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930e9619-3e1f-454f-b6f0-10cebcf075b3" path="/var/lib/kubelet/pods/930e9619-3e1f-454f-b6f0-10cebcf075b3/volumes" Feb 25 11:46:08 crc kubenswrapper[5005]: I0225 11:46:08.685678 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:46:08 crc kubenswrapper[5005]: E0225 11:46:08.686478 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:46:09 crc kubenswrapper[5005]: I0225 11:46:09.038750 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-wngsg"] Feb 25 11:46:09 crc kubenswrapper[5005]: I0225 11:46:09.049831 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wngsg"] Feb 25 11:46:10 crc kubenswrapper[5005]: I0225 11:46:10.696678 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06f7ac8-89c3-4886-9cd8-58353d24e476" path="/var/lib/kubelet/pods/b06f7ac8-89c3-4886-9cd8-58353d24e476/volumes" Feb 25 11:46:15 crc kubenswrapper[5005]: I0225 11:46:15.030678 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wqnnp"] Feb 25 11:46:15 crc kubenswrapper[5005]: I0225 11:46:15.039806 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wqnnp"] Feb 25 11:46:15 crc kubenswrapper[5005]: I0225 11:46:15.924747 5005 scope.go:117] "RemoveContainer" containerID="345ed5d728ed0d0d7036455b3a9ed66f585f80930fc0eb19f5bdba901adad354" Feb 25 11:46:15 crc kubenswrapper[5005]: I0225 11:46:15.946919 5005 scope.go:117] "RemoveContainer" containerID="f843b1f73bfe00c37a78c01266db491f0b8c22f10d1e81b85edc4ba1e6c0294c" Feb 25 11:46:15 crc kubenswrapper[5005]: I0225 11:46:15.984657 5005 scope.go:117] "RemoveContainer" containerID="815d02c514771320af523b35277ca48176d5cc869d02adb292dffe19d904376b" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.054643 5005 scope.go:117] "RemoveContainer" containerID="ace342d97d3cc627380850762cfb0a51025f9c59c584d1d4c1afab89d5d5c0f1" Feb 25 11:46:16 crc kubenswrapper[5005]: 
I0225 11:46:16.101176 5005 scope.go:117] "RemoveContainer" containerID="b835a9d324d0aed163674299f0268232a5e454350e3023a27a40fbf0c129ebb6" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.124112 5005 scope.go:117] "RemoveContainer" containerID="59848075720a1db4a249ef7203c2fb94753952f95787c54be547b60995c76725" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.590208 5005 scope.go:117] "RemoveContainer" containerID="f580a081e16413868fa6337e55d16bbe79ddf2c7f702cff7a44696c7d609c6ca" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.615465 5005 scope.go:117] "RemoveContainer" containerID="f2ebb03f275271efadc78e5ea3c7d55611237aea67ad3882ea356517a1261b27" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.635852 5005 scope.go:117] "RemoveContainer" containerID="fad59e2dfdd148190603447ea44946b4d8e53047ced797174e21b05b78d9c9b1" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.674471 5005 scope.go:117] "RemoveContainer" containerID="2f1fd2b4d6c287f940d037a15029e5fa6f07d763296be423d3346c04527a84c6" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.700690 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a68e341-40f2-4b1b-b92f-bca9a4946b39" path="/var/lib/kubelet/pods/5a68e341-40f2-4b1b-b92f-bca9a4946b39/volumes" Feb 25 11:46:16 crc kubenswrapper[5005]: I0225 11:46:16.709947 5005 scope.go:117] "RemoveContainer" containerID="8a7417500063e6ba1c55865afe2325e88a10b31fb69926a3a6b290fda7de620c" Feb 25 11:46:17 crc kubenswrapper[5005]: I0225 11:46:17.028215 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hpfk4"] Feb 25 11:46:17 crc kubenswrapper[5005]: I0225 11:46:17.036758 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hpfk4"] Feb 25 11:46:18 crc kubenswrapper[5005]: I0225 11:46:18.698552 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753b53c4-aec7-4a64-bc02-76520bddb879" 
path="/var/lib/kubelet/pods/753b53c4-aec7-4a64-bc02-76520bddb879/volumes" Feb 25 11:46:21 crc kubenswrapper[5005]: I0225 11:46:21.685808 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:46:21 crc kubenswrapper[5005]: E0225 11:46:21.686805 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:46:26 crc kubenswrapper[5005]: I0225 11:46:26.740274 5005 generic.go:334] "Generic (PLEG): container finished" podID="612d0145-d4b9-43a5-a0c1-00fe6be630a4" containerID="3357acec55340978d55816892785f909266652d5d4590221a461f9127123769a" exitCode=0 Feb 25 11:46:26 crc kubenswrapper[5005]: I0225 11:46:26.740394 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" event={"ID":"612d0145-d4b9-43a5-a0c1-00fe6be630a4","Type":"ContainerDied","Data":"3357acec55340978d55816892785f909266652d5d4590221a461f9127123769a"} Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.157217 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.287985 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-inventory\") pod \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.288048 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cv9p\" (UniqueName: \"kubernetes.io/projected/612d0145-d4b9-43a5-a0c1-00fe6be630a4-kube-api-access-6cv9p\") pod \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.288104 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-ssh-key-openstack-edpm-ipam\") pod \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\" (UID: \"612d0145-d4b9-43a5-a0c1-00fe6be630a4\") " Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.296830 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612d0145-d4b9-43a5-a0c1-00fe6be630a4-kube-api-access-6cv9p" (OuterVolumeSpecName: "kube-api-access-6cv9p") pod "612d0145-d4b9-43a5-a0c1-00fe6be630a4" (UID: "612d0145-d4b9-43a5-a0c1-00fe6be630a4"). InnerVolumeSpecName "kube-api-access-6cv9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.317062 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "612d0145-d4b9-43a5-a0c1-00fe6be630a4" (UID: "612d0145-d4b9-43a5-a0c1-00fe6be630a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.317857 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-inventory" (OuterVolumeSpecName: "inventory") pod "612d0145-d4b9-43a5-a0c1-00fe6be630a4" (UID: "612d0145-d4b9-43a5-a0c1-00fe6be630a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.390552 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.390586 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cv9p\" (UniqueName: \"kubernetes.io/projected/612d0145-d4b9-43a5-a0c1-00fe6be630a4-kube-api-access-6cv9p\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.390601 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/612d0145-d4b9-43a5-a0c1-00fe6be630a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.759314 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" 
event={"ID":"612d0145-d4b9-43a5-a0c1-00fe6be630a4","Type":"ContainerDied","Data":"1d0ec24c74567379b3eed1e9e93a617fb9a04bc6f895c455435e55d930c63326"} Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.759354 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0ec24c74567379b3eed1e9e93a617fb9a04bc6f895c455435e55d930c63326" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.759393 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.842415 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx"] Feb 25 11:46:28 crc kubenswrapper[5005]: E0225 11:46:28.842803 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2ade9b-1140-4c1f-82b4-362f0d96792f" containerName="oc" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.842818 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2ade9b-1140-4c1f-82b4-362f0d96792f" containerName="oc" Feb 25 11:46:28 crc kubenswrapper[5005]: E0225 11:46:28.842830 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612d0145-d4b9-43a5-a0c1-00fe6be630a4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.842841 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="612d0145-d4b9-43a5-a0c1-00fe6be630a4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.843048 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="612d0145-d4b9-43a5-a0c1-00fe6be630a4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.843065 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="da2ade9b-1140-4c1f-82b4-362f0d96792f" containerName="oc" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.843810 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.848899 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.848995 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.849876 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.856568 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:46:28 crc kubenswrapper[5005]: I0225 11:46:28.868851 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx"] Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.000620 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.000689 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqpj\" (UniqueName: \"kubernetes.io/projected/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-kube-api-access-6zqpj\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.000722 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.102465 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqpj\" (UniqueName: \"kubernetes.io/projected/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-kube-api-access-6zqpj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.102518 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.102632 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc 
kubenswrapper[5005]: I0225 11:46:29.107728 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.107764 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.117423 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqpj\" (UniqueName: \"kubernetes.io/projected/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-kube-api-access-6zqpj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.160706 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.729307 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx"] Feb 25 11:46:29 crc kubenswrapper[5005]: I0225 11:46:29.767282 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" event={"ID":"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d","Type":"ContainerStarted","Data":"db4bf830c9e017f0767c9377979140ebf4e915f234d06f9b83905a25a60a09ad"} Feb 25 11:46:30 crc kubenswrapper[5005]: I0225 11:46:30.779256 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" event={"ID":"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d","Type":"ContainerStarted","Data":"29edf728632afeaafcb8a78deda69494a539459d7d45dd99c28fffc2ab10088f"} Feb 25 11:46:30 crc kubenswrapper[5005]: I0225 11:46:30.810748 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" podStartSLOduration=2.3743246080000002 podStartE2EDuration="2.810724683s" podCreationTimestamp="2026-02-25 11:46:28 +0000 UTC" firstStartedPulling="2026-02-25 11:46:29.728083363 +0000 UTC m=+1703.768815730" lastFinishedPulling="2026-02-25 11:46:30.164483458 +0000 UTC m=+1704.205215805" observedRunningTime="2026-02-25 11:46:30.799621241 +0000 UTC m=+1704.840353608" watchObservedRunningTime="2026-02-25 11:46:30.810724683 +0000 UTC m=+1704.851457050" Feb 25 11:46:31 crc kubenswrapper[5005]: I0225 11:46:31.047377 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qrzvf"] Feb 25 11:46:31 crc kubenswrapper[5005]: I0225 11:46:31.054013 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qrzvf"] Feb 25 11:46:32 crc kubenswrapper[5005]: I0225 
11:46:32.022887 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6cr5h"] Feb 25 11:46:32 crc kubenswrapper[5005]: I0225 11:46:32.029287 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6cr5h"] Feb 25 11:46:32 crc kubenswrapper[5005]: I0225 11:46:32.696824 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa77b98-833c-4278-b615-49e4c28e69c5" path="/var/lib/kubelet/pods/1fa77b98-833c-4278-b615-49e4c28e69c5/volumes" Feb 25 11:46:32 crc kubenswrapper[5005]: I0225 11:46:32.698318 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3052cba9-7666-438d-a17c-d3028c836c1d" path="/var/lib/kubelet/pods/3052cba9-7666-438d-a17c-d3028c836c1d/volumes" Feb 25 11:46:34 crc kubenswrapper[5005]: I0225 11:46:34.811557 5005 generic.go:334] "Generic (PLEG): container finished" podID="68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" containerID="29edf728632afeaafcb8a78deda69494a539459d7d45dd99c28fffc2ab10088f" exitCode=0 Feb 25 11:46:34 crc kubenswrapper[5005]: I0225 11:46:34.811639 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" event={"ID":"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d","Type":"ContainerDied","Data":"29edf728632afeaafcb8a78deda69494a539459d7d45dd99c28fffc2ab10088f"} Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.246747 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.350664 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-inventory\") pod \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.351254 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-ssh-key-openstack-edpm-ipam\") pod \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.351298 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zqpj\" (UniqueName: \"kubernetes.io/projected/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-kube-api-access-6zqpj\") pod \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\" (UID: \"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d\") " Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.357440 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-kube-api-access-6zqpj" (OuterVolumeSpecName: "kube-api-access-6zqpj") pod "68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" (UID: "68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d"). InnerVolumeSpecName "kube-api-access-6zqpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.373660 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-inventory" (OuterVolumeSpecName: "inventory") pod "68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" (UID: "68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.378776 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" (UID: "68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.454194 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.454481 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.454630 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zqpj\" (UniqueName: \"kubernetes.io/projected/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d-kube-api-access-6zqpj\") on node \"crc\" DevicePath \"\"" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.690754 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:46:36 crc kubenswrapper[5005]: E0225 11:46:36.691254 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.836707 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" event={"ID":"68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d","Type":"ContainerDied","Data":"db4bf830c9e017f0767c9377979140ebf4e915f234d06f9b83905a25a60a09ad"} Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.836745 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4bf830c9e017f0767c9377979140ebf4e915f234d06f9b83905a25a60a09ad" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.836822 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.941889 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst"] Feb 25 11:46:36 crc kubenswrapper[5005]: E0225 11:46:36.942212 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.942226 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.942422 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.942972 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.944856 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.945064 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.946457 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.959359 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst"] Feb 25 11:46:36 crc kubenswrapper[5005]: I0225 11:46:36.961522 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.066405 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdzv\" (UniqueName: \"kubernetes.io/projected/66f4a866-d416-4207-9670-15416eadd794-kube-api-access-hwdzv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.066636 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.066740 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.168353 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdzv\" (UniqueName: \"kubernetes.io/projected/66f4a866-d416-4207-9670-15416eadd794-kube-api-access-hwdzv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.168511 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.168567 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.172898 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-inventory\") 
pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.173057 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.185662 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdzv\" (UniqueName: \"kubernetes.io/projected/66f4a866-d416-4207-9670-15416eadd794-kube-api-access-hwdzv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q2qst\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.262006 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.560140 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst"] Feb 25 11:46:37 crc kubenswrapper[5005]: I0225 11:46:37.846475 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" event={"ID":"66f4a866-d416-4207-9670-15416eadd794","Type":"ContainerStarted","Data":"0fc048ca1d2ce9e028c09b91f4426bf46a11bf7e07265295be4b7be7eb59fb56"} Feb 25 11:46:38 crc kubenswrapper[5005]: I0225 11:46:38.865459 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" event={"ID":"66f4a866-d416-4207-9670-15416eadd794","Type":"ContainerStarted","Data":"7e4f3e81a265735b77c1fbfbbf9e5fff939695fad1c74d2ccdc7b881c4b26b28"} Feb 25 11:46:38 crc kubenswrapper[5005]: I0225 11:46:38.882150 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" podStartSLOduration=2.37221369 podStartE2EDuration="2.882132685s" podCreationTimestamp="2026-02-25 11:46:36 +0000 UTC" firstStartedPulling="2026-02-25 11:46:37.563698467 +0000 UTC m=+1711.604430804" lastFinishedPulling="2026-02-25 11:46:38.073617472 +0000 UTC m=+1712.114349799" observedRunningTime="2026-02-25 11:46:38.878555945 +0000 UTC m=+1712.919288332" watchObservedRunningTime="2026-02-25 11:46:38.882132685 +0000 UTC m=+1712.922865012" Feb 25 11:46:47 crc kubenswrapper[5005]: I0225 11:46:47.685759 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:46:47 crc kubenswrapper[5005]: E0225 11:46:47.686844 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:47:02 crc kubenswrapper[5005]: I0225 11:47:02.685695 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:47:02 crc kubenswrapper[5005]: E0225 11:47:02.686304 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.061921 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3048-account-create-update-h86sj"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.082133 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w992v"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.092263 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3c62-account-create-update-5949j"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.106917 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3c62-account-create-update-5949j"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.114284 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7a9e-account-create-update-v5tcv"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.128140 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-db-create-w992v"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.138682 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qdf8s"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.146309 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-n8jp5"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.159390 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7a9e-account-create-update-v5tcv"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.166896 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qdf8s"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.174071 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-n8jp5"] Feb 25 11:47:09 crc kubenswrapper[5005]: I0225 11:47:09.180657 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3048-account-create-update-h86sj"] Feb 25 11:47:10 crc kubenswrapper[5005]: I0225 11:47:10.696190 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="019aa605-4910-4b2d-aba0-de303611c1f4" path="/var/lib/kubelet/pods/019aa605-4910-4b2d-aba0-de303611c1f4/volumes" Feb 25 11:47:10 crc kubenswrapper[5005]: I0225 11:47:10.696843 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0d38a7-c7e8-4dbd-80a4-403075937b43" path="/var/lib/kubelet/pods/2c0d38a7-c7e8-4dbd-80a4-403075937b43/volumes" Feb 25 11:47:10 crc kubenswrapper[5005]: I0225 11:47:10.697454 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da5666d-9f95-46f2-9455-ee3eaecf137d" path="/var/lib/kubelet/pods/7da5666d-9f95-46f2-9455-ee3eaecf137d/volumes" Feb 25 11:47:10 crc kubenswrapper[5005]: I0225 11:47:10.698070 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafe0b13-d056-4990-b93d-f4cb487c7cd2" 
path="/var/lib/kubelet/pods/bafe0b13-d056-4990-b93d-f4cb487c7cd2/volumes" Feb 25 11:47:10 crc kubenswrapper[5005]: I0225 11:47:10.699393 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5c495c-be81-4a63-b604-a9c3f5d2de7c" path="/var/lib/kubelet/pods/da5c495c-be81-4a63-b604-a9c3f5d2de7c/volumes" Feb 25 11:47:10 crc kubenswrapper[5005]: I0225 11:47:10.699995 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf2d0ce-59f4-40b5-a2fd-fe4991a62dda" path="/var/lib/kubelet/pods/daf2d0ce-59f4-40b5-a2fd-fe4991a62dda/volumes" Feb 25 11:47:16 crc kubenswrapper[5005]: I0225 11:47:16.860509 5005 scope.go:117] "RemoveContainer" containerID="812606c9aa7e1cf9a292b1c36a21333a0c4d5758c70359d5e418e1781af50daf" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.173711 5005 scope.go:117] "RemoveContainer" containerID="38fd8d510baa510ac34891a0c2bf83c739295f5de5ada34a3e25b041e67c7532" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.194994 5005 scope.go:117] "RemoveContainer" containerID="7d355b56657a0b42dd9013ec613aab9610bf772b267f913540ed906741b34b9a" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.273107 5005 scope.go:117] "RemoveContainer" containerID="5a73deb7775153963dcb14bf1d55d8ab13fb9490574f4825b410412ba017db70" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.308153 5005 scope.go:117] "RemoveContainer" containerID="40dddeef3284b447a221f8bad3dd3c30f0d9ff7f43aff7d15107af580728b618" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.341440 5005 scope.go:117] "RemoveContainer" containerID="8267108057d7041af0dff418f9ec4f85bf5d371cfe92a23210fa40f55471ecc5" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.385064 5005 scope.go:117] "RemoveContainer" containerID="bf2586cb62f01cfd5f893e76eaf953c726779ca6b3b72e19e33751979d6ef325" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.403981 5005 scope.go:117] "RemoveContainer" 
containerID="5f462d533525280e83df1e793a4c5a455a8e4d66e81bef7ece22ac36f9d25951" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.430146 5005 scope.go:117] "RemoveContainer" containerID="a4886de982ae95cffe905f9aa650ce1944e0e94f50b541efd4073a37ba763d49" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.455113 5005 scope.go:117] "RemoveContainer" containerID="272f28327588b421ed42cdb025b601ba75a500b6159b3c18438088a373c9d492" Feb 25 11:47:17 crc kubenswrapper[5005]: I0225 11:47:17.686041 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:47:17 crc kubenswrapper[5005]: E0225 11:47:17.686348 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:47:28 crc kubenswrapper[5005]: I0225 11:47:28.418249 5005 generic.go:334] "Generic (PLEG): container finished" podID="66f4a866-d416-4207-9670-15416eadd794" containerID="7e4f3e81a265735b77c1fbfbbf9e5fff939695fad1c74d2ccdc7b881c4b26b28" exitCode=0 Feb 25 11:47:28 crc kubenswrapper[5005]: I0225 11:47:28.418362 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" event={"ID":"66f4a866-d416-4207-9670-15416eadd794","Type":"ContainerDied","Data":"7e4f3e81a265735b77c1fbfbbf9e5fff939695fad1c74d2ccdc7b881c4b26b28"} Feb 25 11:47:29 crc kubenswrapper[5005]: I0225 11:47:29.868419 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:47:29 crc kubenswrapper[5005]: I0225 11:47:29.975483 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-inventory\") pod \"66f4a866-d416-4207-9670-15416eadd794\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " Feb 25 11:47:29 crc kubenswrapper[5005]: I0225 11:47:29.975578 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwdzv\" (UniqueName: \"kubernetes.io/projected/66f4a866-d416-4207-9670-15416eadd794-kube-api-access-hwdzv\") pod \"66f4a866-d416-4207-9670-15416eadd794\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " Feb 25 11:47:29 crc kubenswrapper[5005]: I0225 11:47:29.975639 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-ssh-key-openstack-edpm-ipam\") pod \"66f4a866-d416-4207-9670-15416eadd794\" (UID: \"66f4a866-d416-4207-9670-15416eadd794\") " Feb 25 11:47:29 crc kubenswrapper[5005]: I0225 11:47:29.983506 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f4a866-d416-4207-9670-15416eadd794-kube-api-access-hwdzv" (OuterVolumeSpecName: "kube-api-access-hwdzv") pod "66f4a866-d416-4207-9670-15416eadd794" (UID: "66f4a866-d416-4207-9670-15416eadd794"). InnerVolumeSpecName "kube-api-access-hwdzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.005040 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-inventory" (OuterVolumeSpecName: "inventory") pod "66f4a866-d416-4207-9670-15416eadd794" (UID: "66f4a866-d416-4207-9670-15416eadd794"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.006720 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "66f4a866-d416-4207-9670-15416eadd794" (UID: "66f4a866-d416-4207-9670-15416eadd794"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.078798 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.078944 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwdzv\" (UniqueName: \"kubernetes.io/projected/66f4a866-d416-4207-9670-15416eadd794-kube-api-access-hwdzv\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.078966 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66f4a866-d416-4207-9670-15416eadd794-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.437659 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" event={"ID":"66f4a866-d416-4207-9670-15416eadd794","Type":"ContainerDied","Data":"0fc048ca1d2ce9e028c09b91f4426bf46a11bf7e07265295be4b7be7eb59fb56"} Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.437720 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fc048ca1d2ce9e028c09b91f4426bf46a11bf7e07265295be4b7be7eb59fb56" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 
11:47:30.437793 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.520306 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5jvl"] Feb 25 11:47:30 crc kubenswrapper[5005]: E0225 11:47:30.520672 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f4a866-d416-4207-9670-15416eadd794" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.520692 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f4a866-d416-4207-9670-15416eadd794" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.520857 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f4a866-d416-4207-9670-15416eadd794" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.521403 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.524124 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.524579 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.527739 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.527741 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.536072 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5jvl"] Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.689335 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k4dh\" (UniqueName: \"kubernetes.io/projected/ec0be389-469d-43d6-a40f-98bacf082fdc-kube-api-access-9k4dh\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.689403 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.689439 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.791119 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k4dh\" (UniqueName: \"kubernetes.io/projected/ec0be389-469d-43d6-a40f-98bacf082fdc-kube-api-access-9k4dh\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.791160 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.791184 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.796122 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 
25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.798365 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.816869 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k4dh\" (UniqueName: \"kubernetes.io/projected/ec0be389-469d-43d6-a40f-98bacf082fdc-kube-api-access-9k4dh\") pod \"ssh-known-hosts-edpm-deployment-j5jvl\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:30 crc kubenswrapper[5005]: I0225 11:47:30.838774 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:31 crc kubenswrapper[5005]: I0225 11:47:31.413182 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5jvl"] Feb 25 11:47:31 crc kubenswrapper[5005]: I0225 11:47:31.419975 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:47:31 crc kubenswrapper[5005]: I0225 11:47:31.475822 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" event={"ID":"ec0be389-469d-43d6-a40f-98bacf082fdc","Type":"ContainerStarted","Data":"11eaefdd395b95ea0a94e4033f9e3ba9b436778258f2b4144d599b91ef988e93"} Feb 25 11:47:32 crc kubenswrapper[5005]: I0225 11:47:32.686580 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:47:32 crc kubenswrapper[5005]: E0225 11:47:32.687704 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:47:33 crc kubenswrapper[5005]: I0225 11:47:33.047967 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-twxb2"] Feb 25 11:47:33 crc kubenswrapper[5005]: I0225 11:47:33.059735 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-twxb2"] Feb 25 11:47:33 crc kubenswrapper[5005]: I0225 11:47:33.493544 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" event={"ID":"ec0be389-469d-43d6-a40f-98bacf082fdc","Type":"ContainerStarted","Data":"777a1139954e0117593aab3a1f0633f3db4b45fb8183309ae6122a7b9660dc2a"} Feb 25 11:47:33 crc kubenswrapper[5005]: I0225 11:47:33.510624 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" podStartSLOduration=1.834110683 podStartE2EDuration="3.510576962s" podCreationTimestamp="2026-02-25 11:47:30 +0000 UTC" firstStartedPulling="2026-02-25 11:47:31.419763886 +0000 UTC m=+1765.460496223" lastFinishedPulling="2026-02-25 11:47:33.096230145 +0000 UTC m=+1767.136962502" observedRunningTime="2026-02-25 11:47:33.510312154 +0000 UTC m=+1767.551044501" watchObservedRunningTime="2026-02-25 11:47:33.510576962 +0000 UTC m=+1767.551309319" Feb 25 11:47:34 crc kubenswrapper[5005]: I0225 11:47:34.699072 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811df314-ec45-441f-9869-cb6b976163cb" path="/var/lib/kubelet/pods/811df314-ec45-441f-9869-cb6b976163cb/volumes" Feb 25 11:47:40 crc kubenswrapper[5005]: I0225 11:47:40.561316 5005 generic.go:334] "Generic (PLEG): container 
finished" podID="ec0be389-469d-43d6-a40f-98bacf082fdc" containerID="777a1139954e0117593aab3a1f0633f3db4b45fb8183309ae6122a7b9660dc2a" exitCode=0 Feb 25 11:47:40 crc kubenswrapper[5005]: I0225 11:47:40.561354 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" event={"ID":"ec0be389-469d-43d6-a40f-98bacf082fdc","Type":"ContainerDied","Data":"777a1139954e0117593aab3a1f0633f3db4b45fb8183309ae6122a7b9660dc2a"} Feb 25 11:47:41 crc kubenswrapper[5005]: I0225 11:47:41.946982 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.140470 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k4dh\" (UniqueName: \"kubernetes.io/projected/ec0be389-469d-43d6-a40f-98bacf082fdc-kube-api-access-9k4dh\") pod \"ec0be389-469d-43d6-a40f-98bacf082fdc\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.140565 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-inventory-0\") pod \"ec0be389-469d-43d6-a40f-98bacf082fdc\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.140606 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-ssh-key-openstack-edpm-ipam\") pod \"ec0be389-469d-43d6-a40f-98bacf082fdc\" (UID: \"ec0be389-469d-43d6-a40f-98bacf082fdc\") " Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.145979 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0be389-469d-43d6-a40f-98bacf082fdc-kube-api-access-9k4dh" 
(OuterVolumeSpecName: "kube-api-access-9k4dh") pod "ec0be389-469d-43d6-a40f-98bacf082fdc" (UID: "ec0be389-469d-43d6-a40f-98bacf082fdc"). InnerVolumeSpecName "kube-api-access-9k4dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.168254 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ec0be389-469d-43d6-a40f-98bacf082fdc" (UID: "ec0be389-469d-43d6-a40f-98bacf082fdc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.172243 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec0be389-469d-43d6-a40f-98bacf082fdc" (UID: "ec0be389-469d-43d6-a40f-98bacf082fdc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.242005 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k4dh\" (UniqueName: \"kubernetes.io/projected/ec0be389-469d-43d6-a40f-98bacf082fdc-kube-api-access-9k4dh\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.242043 5005 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.242059 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec0be389-469d-43d6-a40f-98bacf082fdc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.578044 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" event={"ID":"ec0be389-469d-43d6-a40f-98bacf082fdc","Type":"ContainerDied","Data":"11eaefdd395b95ea0a94e4033f9e3ba9b436778258f2b4144d599b91ef988e93"} Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.578078 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11eaefdd395b95ea0a94e4033f9e3ba9b436778258f2b4144d599b91ef988e93" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.578180 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5jvl" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.674919 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh"] Feb 25 11:47:42 crc kubenswrapper[5005]: E0225 11:47:42.675322 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0be389-469d-43d6-a40f-98bacf082fdc" containerName="ssh-known-hosts-edpm-deployment" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.675339 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0be389-469d-43d6-a40f-98bacf082fdc" containerName="ssh-known-hosts-edpm-deployment" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.675544 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0be389-469d-43d6-a40f-98bacf082fdc" containerName="ssh-known-hosts-edpm-deployment" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.676253 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.679426 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.679969 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.680214 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.680484 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.697537 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh"] Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.850612 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnzhn\" (UniqueName: \"kubernetes.io/projected/3696b575-7217-4b6a-8097-f09d166f483c-kube-api-access-wnzhn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.850676 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.850778 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.952100 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.952204 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnzhn\" (UniqueName: \"kubernetes.io/projected/3696b575-7217-4b6a-8097-f09d166f483c-kube-api-access-wnzhn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.952239 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.957742 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.961430 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:42 crc kubenswrapper[5005]: I0225 11:47:42.975041 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnzhn\" (UniqueName: \"kubernetes.io/projected/3696b575-7217-4b6a-8097-f09d166f483c-kube-api-access-wnzhn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-d9zjh\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:43 crc kubenswrapper[5005]: I0225 11:47:43.007606 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:43 crc kubenswrapper[5005]: I0225 11:47:43.537354 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh"] Feb 25 11:47:43 crc kubenswrapper[5005]: I0225 11:47:43.585705 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" event={"ID":"3696b575-7217-4b6a-8097-f09d166f483c","Type":"ContainerStarted","Data":"ec83f385ebeaca3629dc6cba7301f6e39dfc121a257075c93e476336342ce53a"} Feb 25 11:47:44 crc kubenswrapper[5005]: I0225 11:47:44.596077 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" event={"ID":"3696b575-7217-4b6a-8097-f09d166f483c","Type":"ContainerStarted","Data":"ac18108c62a1c5b5d211df7c3cbe125fecc1bfb291375c0b4f829cae36905f1f"} Feb 25 11:47:44 crc kubenswrapper[5005]: I0225 11:47:44.613626 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" podStartSLOduration=2.122575661 podStartE2EDuration="2.613607345s" podCreationTimestamp="2026-02-25 11:47:42 +0000 UTC" firstStartedPulling="2026-02-25 11:47:43.546144489 +0000 UTC m=+1777.586876816" lastFinishedPulling="2026-02-25 11:47:44.037176173 +0000 UTC m=+1778.077908500" observedRunningTime="2026-02-25 11:47:44.6077161 +0000 UTC m=+1778.648448457" watchObservedRunningTime="2026-02-25 11:47:44.613607345 +0000 UTC m=+1778.654339672" Feb 25 11:47:47 crc kubenswrapper[5005]: I0225 11:47:47.685815 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:47:47 crc kubenswrapper[5005]: E0225 11:47:47.687172 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:47:51 crc kubenswrapper[5005]: I0225 11:47:51.671733 5005 generic.go:334] "Generic (PLEG): container finished" podID="3696b575-7217-4b6a-8097-f09d166f483c" containerID="ac18108c62a1c5b5d211df7c3cbe125fecc1bfb291375c0b4f829cae36905f1f" exitCode=0 Feb 25 11:47:51 crc kubenswrapper[5005]: I0225 11:47:51.671814 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" event={"ID":"3696b575-7217-4b6a-8097-f09d166f483c","Type":"ContainerDied","Data":"ac18108c62a1c5b5d211df7c3cbe125fecc1bfb291375c0b4f829cae36905f1f"} Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.057266 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.236693 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-inventory\") pod \"3696b575-7217-4b6a-8097-f09d166f483c\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.236819 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnzhn\" (UniqueName: \"kubernetes.io/projected/3696b575-7217-4b6a-8097-f09d166f483c-kube-api-access-wnzhn\") pod \"3696b575-7217-4b6a-8097-f09d166f483c\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.236986 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-ssh-key-openstack-edpm-ipam\") pod \"3696b575-7217-4b6a-8097-f09d166f483c\" (UID: \"3696b575-7217-4b6a-8097-f09d166f483c\") " Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.244784 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3696b575-7217-4b6a-8097-f09d166f483c-kube-api-access-wnzhn" (OuterVolumeSpecName: "kube-api-access-wnzhn") pod "3696b575-7217-4b6a-8097-f09d166f483c" (UID: "3696b575-7217-4b6a-8097-f09d166f483c"). InnerVolumeSpecName "kube-api-access-wnzhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.268613 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-inventory" (OuterVolumeSpecName: "inventory") pod "3696b575-7217-4b6a-8097-f09d166f483c" (UID: "3696b575-7217-4b6a-8097-f09d166f483c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.269849 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3696b575-7217-4b6a-8097-f09d166f483c" (UID: "3696b575-7217-4b6a-8097-f09d166f483c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.339603 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.339657 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3696b575-7217-4b6a-8097-f09d166f483c-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.339679 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnzhn\" (UniqueName: \"kubernetes.io/projected/3696b575-7217-4b6a-8097-f09d166f483c-kube-api-access-wnzhn\") on node \"crc\" DevicePath \"\"" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.692746 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" event={"ID":"3696b575-7217-4b6a-8097-f09d166f483c","Type":"ContainerDied","Data":"ec83f385ebeaca3629dc6cba7301f6e39dfc121a257075c93e476336342ce53a"} Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.692904 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec83f385ebeaca3629dc6cba7301f6e39dfc121a257075c93e476336342ce53a" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.692852 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.787798 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5"] Feb 25 11:47:53 crc kubenswrapper[5005]: E0225 11:47:53.788261 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3696b575-7217-4b6a-8097-f09d166f483c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.788282 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3696b575-7217-4b6a-8097-f09d166f483c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.788812 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3696b575-7217-4b6a-8097-f09d166f483c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.789492 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.792560 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.792756 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.793607 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.793807 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.795854 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5"] Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.848907 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.849004 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwp9\" (UniqueName: \"kubernetes.io/projected/50159114-de19-4815-b634-5c335e4b792e-kube-api-access-zqwp9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.849157 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.950590 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.950927 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.951058 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqwp9\" (UniqueName: \"kubernetes.io/projected/50159114-de19-4815-b634-5c335e4b792e-kube-api-access-zqwp9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.955280 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.955482 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:53 crc kubenswrapper[5005]: I0225 11:47:53.966892 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqwp9\" (UniqueName: \"kubernetes.io/projected/50159114-de19-4815-b634-5c335e4b792e-kube-api-access-zqwp9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:54 crc kubenswrapper[5005]: I0225 11:47:54.049455 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x788q"] Feb 25 11:47:54 crc kubenswrapper[5005]: I0225 11:47:54.062683 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x788q"] Feb 25 11:47:54 crc kubenswrapper[5005]: I0225 11:47:54.146082 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:47:54 crc kubenswrapper[5005]: I0225 11:47:54.697912 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5f2bfb-c847-4266-9def-11101efa2256" path="/var/lib/kubelet/pods/aa5f2bfb-c847-4266-9def-11101efa2256/volumes" Feb 25 11:47:54 crc kubenswrapper[5005]: I0225 11:47:54.737530 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5"] Feb 25 11:47:54 crc kubenswrapper[5005]: W0225 11:47:54.741608 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50159114_de19_4815_b634_5c335e4b792e.slice/crio-52b8cec569dd3e99af51671574b121d26b61cbebadb48527226ba2830d9b7e00 WatchSource:0}: Error finding container 52b8cec569dd3e99af51671574b121d26b61cbebadb48527226ba2830d9b7e00: Status 404 returned error can't find the container with id 52b8cec569dd3e99af51671574b121d26b61cbebadb48527226ba2830d9b7e00 Feb 25 11:47:55 crc kubenswrapper[5005]: I0225 11:47:55.029712 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ntj86"] Feb 25 11:47:55 crc kubenswrapper[5005]: I0225 11:47:55.042913 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ntj86"] Feb 25 11:47:55 crc kubenswrapper[5005]: I0225 11:47:55.713271 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" event={"ID":"50159114-de19-4815-b634-5c335e4b792e","Type":"ContainerStarted","Data":"b5f44d6dcd26c451670cd6ada77f547859a7da1b14cc0dd151b8e3ad039e9dce"} Feb 25 11:47:55 crc kubenswrapper[5005]: I0225 11:47:55.715504 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" 
event={"ID":"50159114-de19-4815-b634-5c335e4b792e","Type":"ContainerStarted","Data":"52b8cec569dd3e99af51671574b121d26b61cbebadb48527226ba2830d9b7e00"} Feb 25 11:47:55 crc kubenswrapper[5005]: I0225 11:47:55.735943 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" podStartSLOduration=2.336305818 podStartE2EDuration="2.735925743s" podCreationTimestamp="2026-02-25 11:47:53 +0000 UTC" firstStartedPulling="2026-02-25 11:47:54.744987368 +0000 UTC m=+1788.785719705" lastFinishedPulling="2026-02-25 11:47:55.144607273 +0000 UTC m=+1789.185339630" observedRunningTime="2026-02-25 11:47:55.731561326 +0000 UTC m=+1789.772293653" watchObservedRunningTime="2026-02-25 11:47:55.735925743 +0000 UTC m=+1789.776658070" Feb 25 11:47:56 crc kubenswrapper[5005]: I0225 11:47:56.707834 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1734d86e-d703-4000-9058-bdc27eae9765" path="/var/lib/kubelet/pods/1734d86e-d703-4000-9058-bdc27eae9765/volumes" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.136612 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533668-skbjk"] Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.138748 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.143741 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.143964 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.144161 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.151917 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533668-skbjk"] Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.291349 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppmn4\" (UniqueName: \"kubernetes.io/projected/238f3b62-82a3-45ae-b0e2-f1a63e4d6412-kube-api-access-ppmn4\") pod \"auto-csr-approver-29533668-skbjk\" (UID: \"238f3b62-82a3-45ae-b0e2-f1a63e4d6412\") " pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.393017 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppmn4\" (UniqueName: \"kubernetes.io/projected/238f3b62-82a3-45ae-b0e2-f1a63e4d6412-kube-api-access-ppmn4\") pod \"auto-csr-approver-29533668-skbjk\" (UID: \"238f3b62-82a3-45ae-b0e2-f1a63e4d6412\") " pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.412722 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppmn4\" (UniqueName: \"kubernetes.io/projected/238f3b62-82a3-45ae-b0e2-f1a63e4d6412-kube-api-access-ppmn4\") pod \"auto-csr-approver-29533668-skbjk\" (UID: \"238f3b62-82a3-45ae-b0e2-f1a63e4d6412\") " 
pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.464954 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:00 crc kubenswrapper[5005]: I0225 11:48:00.767292 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533668-skbjk"] Feb 25 11:48:01 crc kubenswrapper[5005]: I0225 11:48:01.781731 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533668-skbjk" event={"ID":"238f3b62-82a3-45ae-b0e2-f1a63e4d6412","Type":"ContainerStarted","Data":"33a559e3e63d192397d32726b46fc165284bc5bc9218c7987d3cf1ca781132a2"} Feb 25 11:48:02 crc kubenswrapper[5005]: E0225 11:48:02.620998 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod238f3b62_82a3_45ae_b0e2_f1a63e4d6412.slice/crio-d553b4a9b75baaa5c41e569d3f322919d6dd87597ba12fdb5d17a92ecf3d3397.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod238f3b62_82a3_45ae_b0e2_f1a63e4d6412.slice/crio-conmon-d553b4a9b75baaa5c41e569d3f322919d6dd87597ba12fdb5d17a92ecf3d3397.scope\": RecentStats: unable to find data in memory cache]" Feb 25 11:48:02 crc kubenswrapper[5005]: I0225 11:48:02.685423 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:48:02 crc kubenswrapper[5005]: E0225 11:48:02.686170 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:48:02 crc kubenswrapper[5005]: I0225 11:48:02.793642 5005 generic.go:334] "Generic (PLEG): container finished" podID="238f3b62-82a3-45ae-b0e2-f1a63e4d6412" containerID="d553b4a9b75baaa5c41e569d3f322919d6dd87597ba12fdb5d17a92ecf3d3397" exitCode=0 Feb 25 11:48:02 crc kubenswrapper[5005]: I0225 11:48:02.793682 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533668-skbjk" event={"ID":"238f3b62-82a3-45ae-b0e2-f1a63e4d6412","Type":"ContainerDied","Data":"d553b4a9b75baaa5c41e569d3f322919d6dd87597ba12fdb5d17a92ecf3d3397"} Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.071935 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.182576 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppmn4\" (UniqueName: \"kubernetes.io/projected/238f3b62-82a3-45ae-b0e2-f1a63e4d6412-kube-api-access-ppmn4\") pod \"238f3b62-82a3-45ae-b0e2-f1a63e4d6412\" (UID: \"238f3b62-82a3-45ae-b0e2-f1a63e4d6412\") " Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.187397 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/238f3b62-82a3-45ae-b0e2-f1a63e4d6412-kube-api-access-ppmn4" (OuterVolumeSpecName: "kube-api-access-ppmn4") pod "238f3b62-82a3-45ae-b0e2-f1a63e4d6412" (UID: "238f3b62-82a3-45ae-b0e2-f1a63e4d6412"). InnerVolumeSpecName "kube-api-access-ppmn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.284890 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppmn4\" (UniqueName: \"kubernetes.io/projected/238f3b62-82a3-45ae-b0e2-f1a63e4d6412-kube-api-access-ppmn4\") on node \"crc\" DevicePath \"\"" Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.812394 5005 generic.go:334] "Generic (PLEG): container finished" podID="50159114-de19-4815-b634-5c335e4b792e" containerID="b5f44d6dcd26c451670cd6ada77f547859a7da1b14cc0dd151b8e3ad039e9dce" exitCode=0 Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.812476 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" event={"ID":"50159114-de19-4815-b634-5c335e4b792e","Type":"ContainerDied","Data":"b5f44d6dcd26c451670cd6ada77f547859a7da1b14cc0dd151b8e3ad039e9dce"} Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.816110 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533668-skbjk" event={"ID":"238f3b62-82a3-45ae-b0e2-f1a63e4d6412","Type":"ContainerDied","Data":"33a559e3e63d192397d32726b46fc165284bc5bc9218c7987d3cf1ca781132a2"} Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.816147 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a559e3e63d192397d32726b46fc165284bc5bc9218c7987d3cf1ca781132a2" Feb 25 11:48:04 crc kubenswrapper[5005]: I0225 11:48:04.816152 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533668-skbjk" Feb 25 11:48:05 crc kubenswrapper[5005]: I0225 11:48:05.135758 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533662-6cx7c"] Feb 25 11:48:05 crc kubenswrapper[5005]: I0225 11:48:05.143336 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533662-6cx7c"] Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.355503 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.422837 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-inventory\") pod \"50159114-de19-4815-b634-5c335e4b792e\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.423084 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-ssh-key-openstack-edpm-ipam\") pod \"50159114-de19-4815-b634-5c335e4b792e\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.423123 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwp9\" (UniqueName: \"kubernetes.io/projected/50159114-de19-4815-b634-5c335e4b792e-kube-api-access-zqwp9\") pod \"50159114-de19-4815-b634-5c335e4b792e\" (UID: \"50159114-de19-4815-b634-5c335e4b792e\") " Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.431616 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50159114-de19-4815-b634-5c335e4b792e-kube-api-access-zqwp9" (OuterVolumeSpecName: "kube-api-access-zqwp9") 
pod "50159114-de19-4815-b634-5c335e4b792e" (UID: "50159114-de19-4815-b634-5c335e4b792e"). InnerVolumeSpecName "kube-api-access-zqwp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.445596 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50159114-de19-4815-b634-5c335e4b792e" (UID: "50159114-de19-4815-b634-5c335e4b792e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.457284 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-inventory" (OuterVolumeSpecName: "inventory") pod "50159114-de19-4815-b634-5c335e4b792e" (UID: "50159114-de19-4815-b634-5c335e4b792e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.524447 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.524653 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqwp9\" (UniqueName: \"kubernetes.io/projected/50159114-de19-4815-b634-5c335e4b792e-kube-api-access-zqwp9\") on node \"crc\" DevicePath \"\"" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.524782 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50159114-de19-4815-b634-5c335e4b792e-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.695114 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caeeaeb7-a7f5-49f0-9fa2-2788312deefa" path="/var/lib/kubelet/pods/caeeaeb7-a7f5-49f0-9fa2-2788312deefa/volumes" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.987080 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" event={"ID":"50159114-de19-4815-b634-5c335e4b792e","Type":"ContainerDied","Data":"52b8cec569dd3e99af51671574b121d26b61cbebadb48527226ba2830d9b7e00"} Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.987472 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52b8cec569dd3e99af51671574b121d26b61cbebadb48527226ba2830d9b7e00" Feb 25 11:48:06 crc kubenswrapper[5005]: I0225 11:48:06.987110 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5" Feb 25 11:48:15 crc kubenswrapper[5005]: I0225 11:48:15.685772 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:48:15 crc kubenswrapper[5005]: E0225 11:48:15.686596 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:48:17 crc kubenswrapper[5005]: I0225 11:48:17.682491 5005 scope.go:117] "RemoveContainer" containerID="0b05392c1b22cf360818b23617594aab29720670bc1bbcf219fb4e2f3b9bbe59" Feb 25 11:48:17 crc kubenswrapper[5005]: I0225 11:48:17.755486 5005 scope.go:117] "RemoveContainer" containerID="d1f0b3ffae2f57e898b99e6db839bf3a05034740859bb1e89de16ea30c2661d0" Feb 25 11:48:17 crc kubenswrapper[5005]: I0225 11:48:17.810506 5005 scope.go:117] "RemoveContainer" containerID="8d018400ea1ec8acbfc543580027d54c18924188194205ba6bc96493fb10ef84" Feb 25 11:48:17 crc kubenswrapper[5005]: I0225 11:48:17.847144 5005 scope.go:117] "RemoveContainer" containerID="a5347fde957aa10d2aa20a22ae6ea22c21001a4e6d1794c539dd59af0f2ba786" Feb 25 11:48:26 crc kubenswrapper[5005]: I0225 11:48:26.696560 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:48:26 crc kubenswrapper[5005]: E0225 11:48:26.697301 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:48:40 crc kubenswrapper[5005]: I0225 11:48:40.053761 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zqrvp"] Feb 25 11:48:40 crc kubenswrapper[5005]: I0225 11:48:40.065046 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-zqrvp"] Feb 25 11:48:40 crc kubenswrapper[5005]: I0225 11:48:40.686650 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:48:40 crc kubenswrapper[5005]: E0225 11:48:40.687189 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:48:40 crc kubenswrapper[5005]: I0225 11:48:40.704271 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277c7024-9f41-4e45-afd0-68cf9af20681" path="/var/lib/kubelet/pods/277c7024-9f41-4e45-afd0-68cf9af20681/volumes" Feb 25 11:48:54 crc kubenswrapper[5005]: I0225 11:48:54.685823 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:48:54 crc kubenswrapper[5005]: E0225 11:48:54.687055 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:49:08 crc kubenswrapper[5005]: I0225 11:49:08.685490 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:49:09 crc kubenswrapper[5005]: I0225 11:49:09.620557 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"ad704000137b23295360aa1c0e0fd91240a3ef27bd833fc620dad6c30f9284e5"} Feb 25 11:49:17 crc kubenswrapper[5005]: I0225 11:49:17.932231 5005 scope.go:117] "RemoveContainer" containerID="ddea575fffbe9605941f6d3d236f15b2b2a08d14f8d8cf9aa9086ee9fb7331e2" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.142782 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533670-9xs56"] Feb 25 11:50:00 crc kubenswrapper[5005]: E0225 11:50:00.143948 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238f3b62-82a3-45ae-b0e2-f1a63e4d6412" containerName="oc" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.143973 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="238f3b62-82a3-45ae-b0e2-f1a63e4d6412" containerName="oc" Feb 25 11:50:00 crc kubenswrapper[5005]: E0225 11:50:00.144015 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50159114-de19-4815-b634-5c335e4b792e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.144029 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="50159114-de19-4815-b634-5c335e4b792e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.145908 5005 
memory_manager.go:354] "RemoveStaleState removing state" podUID="50159114-de19-4815-b634-5c335e4b792e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.145956 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="238f3b62-82a3-45ae-b0e2-f1a63e4d6412" containerName="oc" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.146890 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.149557 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.150060 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.153637 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.159340 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533670-9xs56"] Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.162706 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lx2\" (UniqueName: \"kubernetes.io/projected/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4-kube-api-access-w6lx2\") pod \"auto-csr-approver-29533670-9xs56\" (UID: \"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4\") " pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.265119 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lx2\" (UniqueName: \"kubernetes.io/projected/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4-kube-api-access-w6lx2\") pod 
\"auto-csr-approver-29533670-9xs56\" (UID: \"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4\") " pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.294332 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lx2\" (UniqueName: \"kubernetes.io/projected/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4-kube-api-access-w6lx2\") pod \"auto-csr-approver-29533670-9xs56\" (UID: \"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4\") " pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.480087 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:00 crc kubenswrapper[5005]: W0225 11:50:00.968794 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f3f0c56_c8c5_4737_8e2a_de8bd6c86fc4.slice/crio-851956a23431f9849baec690a7e93accc9deed0830a6f36afbab16cfd5906e8c WatchSource:0}: Error finding container 851956a23431f9849baec690a7e93accc9deed0830a6f36afbab16cfd5906e8c: Status 404 returned error can't find the container with id 851956a23431f9849baec690a7e93accc9deed0830a6f36afbab16cfd5906e8c Feb 25 11:50:00 crc kubenswrapper[5005]: I0225 11:50:00.990645 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533670-9xs56"] Feb 25 11:50:01 crc kubenswrapper[5005]: I0225 11:50:01.092529 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533670-9xs56" event={"ID":"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4","Type":"ContainerStarted","Data":"851956a23431f9849baec690a7e93accc9deed0830a6f36afbab16cfd5906e8c"} Feb 25 11:50:03 crc kubenswrapper[5005]: I0225 11:50:03.112023 5005 generic.go:334] "Generic (PLEG): container finished" podID="1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4" 
containerID="94620fbde0d4da1256b83d4d61ac5fb46231a6fea341a2495ba4e1ca66ac1b37" exitCode=0 Feb 25 11:50:03 crc kubenswrapper[5005]: I0225 11:50:03.112081 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533670-9xs56" event={"ID":"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4","Type":"ContainerDied","Data":"94620fbde0d4da1256b83d4d61ac5fb46231a6fea341a2495ba4e1ca66ac1b37"} Feb 25 11:50:04 crc kubenswrapper[5005]: I0225 11:50:04.473178 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:04 crc kubenswrapper[5005]: I0225 11:50:04.649215 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lx2\" (UniqueName: \"kubernetes.io/projected/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4-kube-api-access-w6lx2\") pod \"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4\" (UID: \"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4\") " Feb 25 11:50:04 crc kubenswrapper[5005]: I0225 11:50:04.654628 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4-kube-api-access-w6lx2" (OuterVolumeSpecName: "kube-api-access-w6lx2") pod "1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4" (UID: "1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4"). InnerVolumeSpecName "kube-api-access-w6lx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:50:04 crc kubenswrapper[5005]: I0225 11:50:04.752300 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lx2\" (UniqueName: \"kubernetes.io/projected/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4-kube-api-access-w6lx2\") on node \"crc\" DevicePath \"\"" Feb 25 11:50:05 crc kubenswrapper[5005]: I0225 11:50:05.135735 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533670-9xs56" event={"ID":"1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4","Type":"ContainerDied","Data":"851956a23431f9849baec690a7e93accc9deed0830a6f36afbab16cfd5906e8c"} Feb 25 11:50:05 crc kubenswrapper[5005]: I0225 11:50:05.135792 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="851956a23431f9849baec690a7e93accc9deed0830a6f36afbab16cfd5906e8c" Feb 25 11:50:05 crc kubenswrapper[5005]: I0225 11:50:05.135811 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533670-9xs56" Feb 25 11:50:05 crc kubenswrapper[5005]: I0225 11:50:05.556649 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533664-tsb8n"] Feb 25 11:50:05 crc kubenswrapper[5005]: I0225 11:50:05.567049 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533664-tsb8n"] Feb 25 11:50:06 crc kubenswrapper[5005]: I0225 11:50:06.727079 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceaeb004-07b3-4e93-a97e-a251ed27a076" path="/var/lib/kubelet/pods/ceaeb004-07b3-4e93-a97e-a251ed27a076/volumes" Feb 25 11:50:18 crc kubenswrapper[5005]: I0225 11:50:18.013046 5005 scope.go:117] "RemoveContainer" containerID="25e6ee26bb952e6a503927614d7a12622db82da46d6d9116f7fb1941ef460671" Feb 25 11:51:28 crc kubenswrapper[5005]: I0225 11:51:28.087682 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:51:28 crc kubenswrapper[5005]: I0225 11:51:28.088333 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.428036 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hjq8g"] Feb 25 11:51:34 crc kubenswrapper[5005]: E0225 11:51:34.429415 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4" containerName="oc" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.429437 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4" containerName="oc" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.429792 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4" containerName="oc" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.432232 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.462702 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjq8g"] Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.556606 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-utilities\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.557031 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9r4r\" (UniqueName: \"kubernetes.io/projected/0ef4365f-7e22-4e31-a793-d42d18f97de9-kube-api-access-s9r4r\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.557240 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-catalog-content\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.658974 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-utilities\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.659106 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s9r4r\" (UniqueName: \"kubernetes.io/projected/0ef4365f-7e22-4e31-a793-d42d18f97de9-kube-api-access-s9r4r\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.659144 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-catalog-content\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.659669 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-catalog-content\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.659666 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-utilities\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.691310 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9r4r\" (UniqueName: \"kubernetes.io/projected/0ef4365f-7e22-4e31-a793-d42d18f97de9-kube-api-access-s9r4r\") pod \"community-operators-hjq8g\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:34 crc kubenswrapper[5005]: I0225 11:51:34.786724 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:35 crc kubenswrapper[5005]: I0225 11:51:35.265663 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjq8g"] Feb 25 11:51:35 crc kubenswrapper[5005]: I0225 11:51:35.987542 5005 generic.go:334] "Generic (PLEG): container finished" podID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerID="823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2" exitCode=0 Feb 25 11:51:35 crc kubenswrapper[5005]: I0225 11:51:35.987921 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjq8g" event={"ID":"0ef4365f-7e22-4e31-a793-d42d18f97de9","Type":"ContainerDied","Data":"823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2"} Feb 25 11:51:35 crc kubenswrapper[5005]: I0225 11:51:35.987959 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjq8g" event={"ID":"0ef4365f-7e22-4e31-a793-d42d18f97de9","Type":"ContainerStarted","Data":"da6106e82424139d572c1e4507062f2e4b129572fdf3abea679472d44d96e09a"} Feb 25 11:51:36 crc kubenswrapper[5005]: I0225 11:51:36.806523 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jm8f8"] Feb 25 11:51:36 crc kubenswrapper[5005]: I0225 11:51:36.811936 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:36 crc kubenswrapper[5005]: I0225 11:51:36.813888 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jm8f8"] Feb 25 11:51:36 crc kubenswrapper[5005]: I0225 11:51:36.998486 5005 generic.go:334] "Generic (PLEG): container finished" podID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerID="73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70" exitCode=0 Feb 25 11:51:36 crc kubenswrapper[5005]: I0225 11:51:36.998550 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjq8g" event={"ID":"0ef4365f-7e22-4e31-a793-d42d18f97de9","Type":"ContainerDied","Data":"73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70"} Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.000461 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qddn\" (UniqueName: \"kubernetes.io/projected/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-kube-api-access-2qddn\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.000509 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-catalog-content\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.000590 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-utilities\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") 
" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.102871 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-utilities\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.103008 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qddn\" (UniqueName: \"kubernetes.io/projected/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-kube-api-access-2qddn\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.103079 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-catalog-content\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.103659 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-utilities\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.103675 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-catalog-content\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc 
kubenswrapper[5005]: I0225 11:51:37.149008 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qddn\" (UniqueName: \"kubernetes.io/projected/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-kube-api-access-2qddn\") pod \"redhat-operators-jm8f8\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.443104 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:37 crc kubenswrapper[5005]: E0225 11:51:37.757294 5005 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.233:54548->38.102.83.233:40985: write tcp 38.102.83.233:54548->38.102.83.233:40985: write: broken pipe Feb 25 11:51:37 crc kubenswrapper[5005]: W0225 11:51:37.917640 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39f12dd1_9750_4d1a_8956_95e6fbf9f3d0.slice/crio-88475bb377ef8ef6453a6cef27027613b1d67d3e6e140315ea1f19cd7b0cb6a0 WatchSource:0}: Error finding container 88475bb377ef8ef6453a6cef27027613b1d67d3e6e140315ea1f19cd7b0cb6a0: Status 404 returned error can't find the container with id 88475bb377ef8ef6453a6cef27027613b1d67d3e6e140315ea1f19cd7b0cb6a0 Feb 25 11:51:37 crc kubenswrapper[5005]: I0225 11:51:37.924177 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jm8f8"] Feb 25 11:51:38 crc kubenswrapper[5005]: I0225 11:51:38.009006 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjq8g" event={"ID":"0ef4365f-7e22-4e31-a793-d42d18f97de9","Type":"ContainerStarted","Data":"8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2"} Feb 25 11:51:38 crc kubenswrapper[5005]: I0225 11:51:38.011291 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerStarted","Data":"88475bb377ef8ef6453a6cef27027613b1d67d3e6e140315ea1f19cd7b0cb6a0"} Feb 25 11:51:38 crc kubenswrapper[5005]: I0225 11:51:38.035430 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hjq8g" podStartSLOduration=2.680988118 podStartE2EDuration="4.035413005s" podCreationTimestamp="2026-02-25 11:51:34 +0000 UTC" firstStartedPulling="2026-02-25 11:51:35.991076437 +0000 UTC m=+2010.031808804" lastFinishedPulling="2026-02-25 11:51:37.345501344 +0000 UTC m=+2011.386233691" observedRunningTime="2026-02-25 11:51:38.030731446 +0000 UTC m=+2012.071463783" watchObservedRunningTime="2026-02-25 11:51:38.035413005 +0000 UTC m=+2012.076145332" Feb 25 11:51:39 crc kubenswrapper[5005]: I0225 11:51:39.021700 5005 generic.go:334] "Generic (PLEG): container finished" podID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerID="dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73" exitCode=0 Feb 25 11:51:39 crc kubenswrapper[5005]: I0225 11:51:39.021801 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerDied","Data":"dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73"} Feb 25 11:51:41 crc kubenswrapper[5005]: I0225 11:51:41.048740 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerStarted","Data":"97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5"} Feb 25 11:51:43 crc kubenswrapper[5005]: I0225 11:51:43.073940 5005 generic.go:334] "Generic (PLEG): container finished" podID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerID="97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5" exitCode=0 Feb 25 
11:51:43 crc kubenswrapper[5005]: I0225 11:51:43.074049 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerDied","Data":"97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5"} Feb 25 11:51:44 crc kubenswrapper[5005]: I0225 11:51:44.786987 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:44 crc kubenswrapper[5005]: I0225 11:51:44.787352 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:44 crc kubenswrapper[5005]: I0225 11:51:44.855816 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:45 crc kubenswrapper[5005]: I0225 11:51:45.094656 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerStarted","Data":"5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714"} Feb 25 11:51:45 crc kubenswrapper[5005]: I0225 11:51:45.134793 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jm8f8" podStartSLOduration=3.827757313 podStartE2EDuration="9.134767565s" podCreationTimestamp="2026-02-25 11:51:36 +0000 UTC" firstStartedPulling="2026-02-25 11:51:39.02403494 +0000 UTC m=+2013.064767277" lastFinishedPulling="2026-02-25 11:51:44.331045172 +0000 UTC m=+2018.371777529" observedRunningTime="2026-02-25 11:51:45.123875227 +0000 UTC m=+2019.164607554" watchObservedRunningTime="2026-02-25 11:51:45.134767565 +0000 UTC m=+2019.175499932" Feb 25 11:51:45 crc kubenswrapper[5005]: I0225 11:51:45.165542 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:46 crc kubenswrapper[5005]: I0225 11:51:46.799725 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjq8g"] Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.119982 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hjq8g" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="registry-server" containerID="cri-o://8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2" gracePeriod=2 Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.443690 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.444117 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.595895 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.609219 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9r4r\" (UniqueName: \"kubernetes.io/projected/0ef4365f-7e22-4e31-a793-d42d18f97de9-kube-api-access-s9r4r\") pod \"0ef4365f-7e22-4e31-a793-d42d18f97de9\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.609474 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-utilities\") pod \"0ef4365f-7e22-4e31-a793-d42d18f97de9\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.609525 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-catalog-content\") pod \"0ef4365f-7e22-4e31-a793-d42d18f97de9\" (UID: \"0ef4365f-7e22-4e31-a793-d42d18f97de9\") " Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.611175 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-utilities" (OuterVolumeSpecName: "utilities") pod "0ef4365f-7e22-4e31-a793-d42d18f97de9" (UID: "0ef4365f-7e22-4e31-a793-d42d18f97de9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.617771 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef4365f-7e22-4e31-a793-d42d18f97de9-kube-api-access-s9r4r" (OuterVolumeSpecName: "kube-api-access-s9r4r") pod "0ef4365f-7e22-4e31-a793-d42d18f97de9" (UID: "0ef4365f-7e22-4e31-a793-d42d18f97de9"). InnerVolumeSpecName "kube-api-access-s9r4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.677181 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ef4365f-7e22-4e31-a793-d42d18f97de9" (UID: "0ef4365f-7e22-4e31-a793-d42d18f97de9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.713172 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9r4r\" (UniqueName: \"kubernetes.io/projected/0ef4365f-7e22-4e31-a793-d42d18f97de9-kube-api-access-s9r4r\") on node \"crc\" DevicePath \"\"" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.713216 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:51:47 crc kubenswrapper[5005]: I0225 11:51:47.713228 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4365f-7e22-4e31-a793-d42d18f97de9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.128739 5005 generic.go:334] "Generic (PLEG): container finished" podID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerID="8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2" exitCode=0 Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.128779 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjq8g" event={"ID":"0ef4365f-7e22-4e31-a793-d42d18f97de9","Type":"ContainerDied","Data":"8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2"} Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.128973 5005 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-hjq8g" event={"ID":"0ef4365f-7e22-4e31-a793-d42d18f97de9","Type":"ContainerDied","Data":"da6106e82424139d572c1e4507062f2e4b129572fdf3abea679472d44d96e09a"} Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.128995 5005 scope.go:117] "RemoveContainer" containerID="8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.128816 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hjq8g" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.160886 5005 scope.go:117] "RemoveContainer" containerID="73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.162300 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjq8g"] Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.170380 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hjq8g"] Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.177513 5005 scope.go:117] "RemoveContainer" containerID="823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.215594 5005 scope.go:117] "RemoveContainer" containerID="8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2" Feb 25 11:51:48 crc kubenswrapper[5005]: E0225 11:51:48.216038 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2\": container with ID starting with 8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2 not found: ID does not exist" containerID="8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 
11:51:48.216072 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2"} err="failed to get container status \"8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2\": rpc error: code = NotFound desc = could not find container \"8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2\": container with ID starting with 8b05bfdd39c7ef17be3d5f30d727ebb7f517f9e8dfcb5532336ba0ebd772f5c2 not found: ID does not exist" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.216096 5005 scope.go:117] "RemoveContainer" containerID="73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70" Feb 25 11:51:48 crc kubenswrapper[5005]: E0225 11:51:48.216434 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70\": container with ID starting with 73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70 not found: ID does not exist" containerID="73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.216458 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70"} err="failed to get container status \"73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70\": rpc error: code = NotFound desc = could not find container \"73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70\": container with ID starting with 73acce9e8d8ccd14f0f62f6e5cb8bff6a5c614cf60b16f80286c94f63026eb70 not found: ID does not exist" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.216475 5005 scope.go:117] "RemoveContainer" containerID="823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2" Feb 25 11:51:48 crc 
kubenswrapper[5005]: E0225 11:51:48.216691 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2\": container with ID starting with 823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2 not found: ID does not exist" containerID="823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.216716 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2"} err="failed to get container status \"823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2\": rpc error: code = NotFound desc = could not find container \"823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2\": container with ID starting with 823f0b3aa1efe771632233682259d53f5655c0cce95677b7b9304da38499f2c2 not found: ID does not exist" Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.508928 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jm8f8" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="registry-server" probeResult="failure" output=< Feb 25 11:51:48 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 11:51:48 crc kubenswrapper[5005]: > Feb 25 11:51:48 crc kubenswrapper[5005]: I0225 11:51:48.697339 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" path="/var/lib/kubelet/pods/0ef4365f-7e22-4e31-a793-d42d18f97de9/volumes" Feb 25 11:51:57 crc kubenswrapper[5005]: I0225 11:51:57.488672 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:57 crc kubenswrapper[5005]: I0225 11:51:57.554639 5005 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:57 crc kubenswrapper[5005]: I0225 11:51:57.734051 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jm8f8"] Feb 25 11:51:58 crc kubenswrapper[5005]: I0225 11:51:58.087787 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:51:58 crc kubenswrapper[5005]: I0225 11:51:58.087841 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.231156 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jm8f8" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="registry-server" containerID="cri-o://5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714" gracePeriod=2 Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.765336 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.823621 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-catalog-content\") pod \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.834091 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-utilities\") pod \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.834249 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qddn\" (UniqueName: \"kubernetes.io/projected/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-kube-api-access-2qddn\") pod \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\" (UID: \"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0\") " Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.835600 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-utilities" (OuterVolumeSpecName: "utilities") pod "39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" (UID: "39f12dd1-9750-4d1a-8956-95e6fbf9f3d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.844635 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-kube-api-access-2qddn" (OuterVolumeSpecName: "kube-api-access-2qddn") pod "39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" (UID: "39f12dd1-9750-4d1a-8956-95e6fbf9f3d0"). InnerVolumeSpecName "kube-api-access-2qddn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.936203 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.936242 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qddn\" (UniqueName: \"kubernetes.io/projected/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-kube-api-access-2qddn\") on node \"crc\" DevicePath \"\"" Feb 25 11:51:59 crc kubenswrapper[5005]: I0225 11:51:59.959144 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" (UID: "39f12dd1-9750-4d1a-8956-95e6fbf9f3d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.037944 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.153974 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533672-2vvsz"] Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.154890 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="extract-utilities" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.154920 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="extract-utilities" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.154945 5005 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="registry-server" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.154957 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="registry-server" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.154977 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="extract-content" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.154991 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="extract-content" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.155006 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="extract-content" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.155016 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="extract-content" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.155035 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="registry-server" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.155045 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="registry-server" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.155073 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="extract-utilities" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.155084 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="extract-utilities" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.155375 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0ef4365f-7e22-4e31-a793-d42d18f97de9" containerName="registry-server" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.155427 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerName="registry-server" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.156248 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.159316 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.160840 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.161150 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.170620 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533672-2vvsz"] Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.241066 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlnb\" (UniqueName: \"kubernetes.io/projected/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4-kube-api-access-jwlnb\") pod \"auto-csr-approver-29533672-2vvsz\" (UID: \"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4\") " pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.243321 5005 generic.go:334] "Generic (PLEG): container finished" podID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" containerID="5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714" exitCode=0 Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.243407 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jm8f8" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.243441 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerDied","Data":"5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714"} Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.243509 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm8f8" event={"ID":"39f12dd1-9750-4d1a-8956-95e6fbf9f3d0","Type":"ContainerDied","Data":"88475bb377ef8ef6453a6cef27027613b1d67d3e6e140315ea1f19cd7b0cb6a0"} Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.243535 5005 scope.go:117] "RemoveContainer" containerID="5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.269997 5005 scope.go:117] "RemoveContainer" containerID="97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.285615 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jm8f8"] Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.298406 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jm8f8"] Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.298542 5005 scope.go:117] "RemoveContainer" containerID="dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.343259 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwlnb\" (UniqueName: \"kubernetes.io/projected/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4-kube-api-access-jwlnb\") pod \"auto-csr-approver-29533672-2vvsz\" (UID: \"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4\") " 
pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.352687 5005 scope.go:117] "RemoveContainer" containerID="5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.353249 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714\": container with ID starting with 5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714 not found: ID does not exist" containerID="5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.353286 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714"} err="failed to get container status \"5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714\": rpc error: code = NotFound desc = could not find container \"5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714\": container with ID starting with 5adb73a63edceec812f46f6a33b3ab3feeaa32790bcdd7968f1e6c7d270b5714 not found: ID does not exist" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.353331 5005 scope.go:117] "RemoveContainer" containerID="97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.353662 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5\": container with ID starting with 97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5 not found: ID does not exist" containerID="97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.353784 
5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5"} err="failed to get container status \"97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5\": rpc error: code = NotFound desc = could not find container \"97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5\": container with ID starting with 97a3feb9074869f36d873beedf245569526c26a70d81bff07f53055c66d60ae5 not found: ID does not exist" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.353803 5005 scope.go:117] "RemoveContainer" containerID="dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73" Feb 25 11:52:00 crc kubenswrapper[5005]: E0225 11:52:00.354218 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73\": container with ID starting with dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73 not found: ID does not exist" containerID="dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.354269 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73"} err="failed to get container status \"dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73\": rpc error: code = NotFound desc = could not find container \"dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73\": container with ID starting with dd5c50eb181336c974dab228c7c92687418c26e667fee42c4bef668d01d47f73 not found: ID does not exist" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.362697 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwlnb\" (UniqueName: 
\"kubernetes.io/projected/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4-kube-api-access-jwlnb\") pod \"auto-csr-approver-29533672-2vvsz\" (UID: \"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4\") " pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.489064 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.698039 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f12dd1-9750-4d1a-8956-95e6fbf9f3d0" path="/var/lib/kubelet/pods/39f12dd1-9750-4d1a-8956-95e6fbf9f3d0/volumes" Feb 25 11:52:00 crc kubenswrapper[5005]: I0225 11:52:00.980443 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533672-2vvsz"] Feb 25 11:52:01 crc kubenswrapper[5005]: I0225 11:52:01.292072 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" event={"ID":"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4","Type":"ContainerStarted","Data":"8f643ea51eca4df8f5ac496107c35d410465b833862f0c9d26fc379598980d7d"} Feb 25 11:52:03 crc kubenswrapper[5005]: I0225 11:52:03.307198 5005 generic.go:334] "Generic (PLEG): container finished" podID="2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4" containerID="d2dd31e152451aad04e3824484f36e5a84d17183fc236cbb9d8748f77e440fd6" exitCode=0 Feb 25 11:52:03 crc kubenswrapper[5005]: I0225 11:52:03.307355 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" event={"ID":"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4","Type":"ContainerDied","Data":"d2dd31e152451aad04e3824484f36e5a84d17183fc236cbb9d8748f77e440fd6"} Feb 25 11:52:04 crc kubenswrapper[5005]: I0225 11:52:04.663116 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:04 crc kubenswrapper[5005]: I0225 11:52:04.836000 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwlnb\" (UniqueName: \"kubernetes.io/projected/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4-kube-api-access-jwlnb\") pod \"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4\" (UID: \"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4\") " Feb 25 11:52:04 crc kubenswrapper[5005]: I0225 11:52:04.844621 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4-kube-api-access-jwlnb" (OuterVolumeSpecName: "kube-api-access-jwlnb") pod "2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4" (UID: "2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4"). InnerVolumeSpecName "kube-api-access-jwlnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:52:04 crc kubenswrapper[5005]: I0225 11:52:04.938878 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwlnb\" (UniqueName: \"kubernetes.io/projected/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4-kube-api-access-jwlnb\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:05 crc kubenswrapper[5005]: I0225 11:52:05.324304 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" event={"ID":"2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4","Type":"ContainerDied","Data":"8f643ea51eca4df8f5ac496107c35d410465b833862f0c9d26fc379598980d7d"} Feb 25 11:52:05 crc kubenswrapper[5005]: I0225 11:52:05.324576 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f643ea51eca4df8f5ac496107c35d410465b833862f0c9d26fc379598980d7d" Feb 25 11:52:05 crc kubenswrapper[5005]: I0225 11:52:05.324744 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533672-2vvsz" Feb 25 11:52:05 crc kubenswrapper[5005]: I0225 11:52:05.740905 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533666-btzqv"] Feb 25 11:52:05 crc kubenswrapper[5005]: I0225 11:52:05.750621 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533666-btzqv"] Feb 25 11:52:06 crc kubenswrapper[5005]: I0225 11:52:06.695411 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2ade9b-1140-4c1f-82b4-362f0d96792f" path="/var/lib/kubelet/pods/da2ade9b-1140-4c1f-82b4-362f0d96792f/volumes" Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.786576 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.799953 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxsrw"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.808614 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.817087 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g496k"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.825113 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.832274 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.837925 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.843441 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.848769 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.854183 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5jvl"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.875298 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.883259 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-d9zjh"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.890813 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5jvl"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.896580 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nrbgx"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.902530 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4xrlg"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.908738 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q2qst"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.918177 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hrbkq"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.925695 5005 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6lsq5"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.932785 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx"] Feb 25 11:52:08 crc kubenswrapper[5005]: I0225 11:52:08.939055 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4znvx"] Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.704410 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1269c8b0-7511-4b9d-b8bd-f64a044ff8b5" path="/var/lib/kubelet/pods/1269c8b0-7511-4b9d-b8bd-f64a044ff8b5/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.706001 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291ad3a8-272b-4a32-b8bd-0d2b7fcb546a" path="/var/lib/kubelet/pods/291ad3a8-272b-4a32-b8bd-0d2b7fcb546a/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.707042 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3696b575-7217-4b6a-8097-f09d166f483c" path="/var/lib/kubelet/pods/3696b575-7217-4b6a-8097-f09d166f483c/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.708330 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50159114-de19-4815-b634-5c335e4b792e" path="/var/lib/kubelet/pods/50159114-de19-4815-b634-5c335e4b792e/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.710267 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612d0145-d4b9-43a5-a0c1-00fe6be630a4" path="/var/lib/kubelet/pods/612d0145-d4b9-43a5-a0c1-00fe6be630a4/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.710815 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f4a866-d416-4207-9670-15416eadd794" path="/var/lib/kubelet/pods/66f4a866-d416-4207-9670-15416eadd794/volumes" 
Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.711450 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d" path="/var/lib/kubelet/pods/68ef2c6f-d13a-4b57-9b87-e6d8c2643a3d/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.712447 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796df0b2-b3a6-4f4f-99cb-dda670ed4411" path="/var/lib/kubelet/pods/796df0b2-b3a6-4f4f-99cb-dda670ed4411/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.712939 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e1d51e-93aa-42b5-af7d-caa1b451b7fb" path="/var/lib/kubelet/pods/b9e1d51e-93aa-42b5-af7d-caa1b451b7fb/volumes" Feb 25 11:52:10 crc kubenswrapper[5005]: I0225 11:52:10.713475 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0be389-469d-43d6-a40f-98bacf082fdc" path="/var/lib/kubelet/pods/ec0be389-469d-43d6-a40f-98bacf082fdc/volumes" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.716471 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw"] Feb 25 11:52:14 crc kubenswrapper[5005]: E0225 11:52:14.717298 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4" containerName="oc" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.717638 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4" containerName="oc" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.717928 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4" containerName="oc" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.718917 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.722317 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.722690 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.722842 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.723025 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.723200 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.730064 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw"] Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.885113 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.885166 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: 
\"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.885208 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.885232 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.885433 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkx62\" (UniqueName: \"kubernetes.io/projected/d7e93344-ea31-4de6-8473-4bd46cbe028b-kube-api-access-bkx62\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.989409 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkx62\" (UniqueName: \"kubernetes.io/projected/d7e93344-ea31-4de6-8473-4bd46cbe028b-kube-api-access-bkx62\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: 
I0225 11:52:14.989555 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.989581 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.989611 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.989635 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.995332 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.995657 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.997622 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:14 crc kubenswrapper[5005]: I0225 11:52:14.998553 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:15 crc kubenswrapper[5005]: I0225 11:52:15.018772 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkx62\" (UniqueName: \"kubernetes.io/projected/d7e93344-ea31-4de6-8473-4bd46cbe028b-kube-api-access-bkx62\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:15 crc kubenswrapper[5005]: I0225 11:52:15.046022 
5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:15 crc kubenswrapper[5005]: I0225 11:52:15.625243 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw"] Feb 25 11:52:15 crc kubenswrapper[5005]: I0225 11:52:15.715298 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" event={"ID":"d7e93344-ea31-4de6-8473-4bd46cbe028b","Type":"ContainerStarted","Data":"6359831cb05683e28b1876f1dfadd1c59c82d0644f67e4dc51de0b9a94a4c3b8"} Feb 25 11:52:16 crc kubenswrapper[5005]: I0225 11:52:16.731800 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" event={"ID":"d7e93344-ea31-4de6-8473-4bd46cbe028b","Type":"ContainerStarted","Data":"ae317686d8c9de9dcbafa594f83baba3f20e423565cc26fc0af5f37ed820df7d"} Feb 25 11:52:16 crc kubenswrapper[5005]: I0225 11:52:16.757550 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" podStartSLOduration=2.344542357 podStartE2EDuration="2.757527048s" podCreationTimestamp="2026-02-25 11:52:14 +0000 UTC" firstStartedPulling="2026-02-25 11:52:15.637638913 +0000 UTC m=+2049.678371240" lastFinishedPulling="2026-02-25 11:52:16.050623604 +0000 UTC m=+2050.091355931" observedRunningTime="2026-02-25 11:52:16.747341013 +0000 UTC m=+2050.788073360" watchObservedRunningTime="2026-02-25 11:52:16.757527048 +0000 UTC m=+2050.798259385" Feb 25 11:52:18 crc kubenswrapper[5005]: I0225 11:52:18.100686 5005 scope.go:117] "RemoveContainer" containerID="b7b1a062f9bb4eead6d8cfdb1b468e7c66460df30a1d8fcf642bca78393b58f1" Feb 25 11:52:18 crc kubenswrapper[5005]: I0225 11:52:18.149718 5005 scope.go:117] "RemoveContainer" 
containerID="706a207c430d9e746e1b94015c889986df9389fe11a733c1b995e8dfda454bfa" Feb 25 11:52:18 crc kubenswrapper[5005]: I0225 11:52:18.190927 5005 scope.go:117] "RemoveContainer" containerID="60b53bcf84c11e3494e210a5aec6360d16f0e25342c611df553d11e582f3cea5" Feb 25 11:52:18 crc kubenswrapper[5005]: I0225 11:52:18.266861 5005 scope.go:117] "RemoveContainer" containerID="33310c3065fedd0cb9a613726fb6541047c25d9729e374ce20d02440df2c524b" Feb 25 11:52:18 crc kubenswrapper[5005]: I0225 11:52:18.309045 5005 scope.go:117] "RemoveContainer" containerID="3357acec55340978d55816892785f909266652d5d4590221a461f9127123769a" Feb 25 11:52:18 crc kubenswrapper[5005]: I0225 11:52:18.366593 5005 scope.go:117] "RemoveContainer" containerID="bbd6076323f167afc45fa69cd58ecc26cb1de71a0593366f148299504254c2f0" Feb 25 11:52:27 crc kubenswrapper[5005]: I0225 11:52:27.242451 5005 generic.go:334] "Generic (PLEG): container finished" podID="d7e93344-ea31-4de6-8473-4bd46cbe028b" containerID="ae317686d8c9de9dcbafa594f83baba3f20e423565cc26fc0af5f37ed820df7d" exitCode=0 Feb 25 11:52:27 crc kubenswrapper[5005]: I0225 11:52:27.242554 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" event={"ID":"d7e93344-ea31-4de6-8473-4bd46cbe028b","Type":"ContainerDied","Data":"ae317686d8c9de9dcbafa594f83baba3f20e423565cc26fc0af5f37ed820df7d"} Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.087655 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.088076 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.088136 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.088881 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad704000137b23295360aa1c0e0fd91240a3ef27bd833fc620dad6c30f9284e5"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.088937 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://ad704000137b23295360aa1c0e0fd91240a3ef27bd833fc620dad6c30f9284e5" gracePeriod=600 Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.254538 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="ad704000137b23295360aa1c0e0fd91240a3ef27bd833fc620dad6c30f9284e5" exitCode=0 Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.254645 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"ad704000137b23295360aa1c0e0fd91240a3ef27bd833fc620dad6c30f9284e5"} Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.254809 5005 scope.go:117] "RemoveContainer" containerID="e2a8d5f65a424a6167d6d361209a412da688e351bf037195b0ced2458f3bac94" Feb 25 11:52:28 crc 
kubenswrapper[5005]: I0225 11:52:28.679214 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.819672 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ceph\") pod \"d7e93344-ea31-4de6-8473-4bd46cbe028b\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.820179 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-repo-setup-combined-ca-bundle\") pod \"d7e93344-ea31-4de6-8473-4bd46cbe028b\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.820593 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ssh-key-openstack-edpm-ipam\") pod \"d7e93344-ea31-4de6-8473-4bd46cbe028b\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.820645 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkx62\" (UniqueName: \"kubernetes.io/projected/d7e93344-ea31-4de6-8473-4bd46cbe028b-kube-api-access-bkx62\") pod \"d7e93344-ea31-4de6-8473-4bd46cbe028b\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.820782 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-inventory\") pod \"d7e93344-ea31-4de6-8473-4bd46cbe028b\" (UID: \"d7e93344-ea31-4de6-8473-4bd46cbe028b\") " Feb 25 
11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.826299 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e93344-ea31-4de6-8473-4bd46cbe028b-kube-api-access-bkx62" (OuterVolumeSpecName: "kube-api-access-bkx62") pod "d7e93344-ea31-4de6-8473-4bd46cbe028b" (UID: "d7e93344-ea31-4de6-8473-4bd46cbe028b"). InnerVolumeSpecName "kube-api-access-bkx62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.826334 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ceph" (OuterVolumeSpecName: "ceph") pod "d7e93344-ea31-4de6-8473-4bd46cbe028b" (UID: "d7e93344-ea31-4de6-8473-4bd46cbe028b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.827857 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d7e93344-ea31-4de6-8473-4bd46cbe028b" (UID: "d7e93344-ea31-4de6-8473-4bd46cbe028b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.844384 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d7e93344-ea31-4de6-8473-4bd46cbe028b" (UID: "d7e93344-ea31-4de6-8473-4bd46cbe028b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.867483 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-inventory" (OuterVolumeSpecName: "inventory") pod "d7e93344-ea31-4de6-8473-4bd46cbe028b" (UID: "d7e93344-ea31-4de6-8473-4bd46cbe028b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.922912 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.923107 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkx62\" (UniqueName: \"kubernetes.io/projected/d7e93344-ea31-4de6-8473-4bd46cbe028b-kube-api-access-bkx62\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.923167 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.923258 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:28 crc kubenswrapper[5005]: I0225 11:52:28.923409 5005 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e93344-ea31-4de6-8473-4bd46cbe028b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.269731 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162"} Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.272986 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" event={"ID":"d7e93344-ea31-4de6-8473-4bd46cbe028b","Type":"ContainerDied","Data":"6359831cb05683e28b1876f1dfadd1c59c82d0644f67e4dc51de0b9a94a4c3b8"} Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.273098 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6359831cb05683e28b1876f1dfadd1c59c82d0644f67e4dc51de0b9a94a4c3b8" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.273309 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.357767 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb"] Feb 25 11:52:29 crc kubenswrapper[5005]: E0225 11:52:29.358551 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e93344-ea31-4de6-8473-4bd46cbe028b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.358680 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e93344-ea31-4de6-8473-4bd46cbe028b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.359045 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e93344-ea31-4de6-8473-4bd46cbe028b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.359919 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.363723 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.365724 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.366398 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.366630 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.366861 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.375757 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb"] Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.535012 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.535473 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" 
(UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.535646 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghv5\" (UniqueName: \"kubernetes.io/projected/6148b2f6-0020-48b4-9d78-19fd37400e69-kube-api-access-vghv5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.535989 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.536117 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.638433 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghv5\" (UniqueName: \"kubernetes.io/projected/6148b2f6-0020-48b4-9d78-19fd37400e69-kube-api-access-vghv5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.638518 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.638563 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.638655 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.639561 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.644338 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: 
\"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.645199 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.646548 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.646685 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.660293 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vghv5\" (UniqueName: \"kubernetes.io/projected/6148b2f6-0020-48b4-9d78-19fd37400e69-kube-api-access-vghv5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:29 crc kubenswrapper[5005]: I0225 11:52:29.682038 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" Feb 25 11:52:30 crc kubenswrapper[5005]: I0225 11:52:30.124682 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb"] Feb 25 11:52:30 crc kubenswrapper[5005]: I0225 11:52:30.280464 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" event={"ID":"6148b2f6-0020-48b4-9d78-19fd37400e69","Type":"ContainerStarted","Data":"47c30996f116bcbb17452efbde207fc812d91129b1d910b2e87cc3c5ce349c23"} Feb 25 11:52:31 crc kubenswrapper[5005]: I0225 11:52:31.288051 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" event={"ID":"6148b2f6-0020-48b4-9d78-19fd37400e69","Type":"ContainerStarted","Data":"493dbe9275dfb09c65d05f51dc06055c84bb34f2a0efa8030b8eccbf95dc0c69"} Feb 25 11:52:31 crc kubenswrapper[5005]: I0225 11:52:31.314754 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" podStartSLOduration=1.806847356 podStartE2EDuration="2.314728226s" podCreationTimestamp="2026-02-25 11:52:29 +0000 UTC" firstStartedPulling="2026-02-25 11:52:30.131425968 +0000 UTC m=+2064.172158335" lastFinishedPulling="2026-02-25 11:52:30.639306878 +0000 UTC m=+2064.680039205" observedRunningTime="2026-02-25 11:52:31.311801493 +0000 UTC m=+2065.352533830" watchObservedRunningTime="2026-02-25 11:52:31.314728226 +0000 UTC m=+2065.355460583" Feb 25 11:53:18 crc kubenswrapper[5005]: I0225 11:53:18.539491 5005 scope.go:117] "RemoveContainer" containerID="29edf728632afeaafcb8a78deda69494a539459d7d45dd99c28fffc2ab10088f" Feb 25 11:53:18 crc kubenswrapper[5005]: I0225 11:53:18.570222 5005 scope.go:117] "RemoveContainer" containerID="7e4f3e81a265735b77c1fbfbbf9e5fff939695fad1c74d2ccdc7b881c4b26b28" Feb 25 11:54:00 crc 
kubenswrapper[5005]: I0225 11:54:00.156736 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533674-t928c"] Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.159879 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533674-t928c" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.162092 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.162192 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.162272 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.167788 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533674-t928c"] Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.195571 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slfnk\" (UniqueName: \"kubernetes.io/projected/3ff346b5-97c2-4bd4-8c95-6cede9eebee8-kube-api-access-slfnk\") pod \"auto-csr-approver-29533674-t928c\" (UID: \"3ff346b5-97c2-4bd4-8c95-6cede9eebee8\") " pod="openshift-infra/auto-csr-approver-29533674-t928c" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.297271 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slfnk\" (UniqueName: \"kubernetes.io/projected/3ff346b5-97c2-4bd4-8c95-6cede9eebee8-kube-api-access-slfnk\") pod \"auto-csr-approver-29533674-t928c\" (UID: \"3ff346b5-97c2-4bd4-8c95-6cede9eebee8\") " pod="openshift-infra/auto-csr-approver-29533674-t928c" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.323340 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-slfnk\" (UniqueName: \"kubernetes.io/projected/3ff346b5-97c2-4bd4-8c95-6cede9eebee8-kube-api-access-slfnk\") pod \"auto-csr-approver-29533674-t928c\" (UID: \"3ff346b5-97c2-4bd4-8c95-6cede9eebee8\") " pod="openshift-infra/auto-csr-approver-29533674-t928c" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.480295 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533674-t928c" Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.988623 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533674-t928c"] Feb 25 11:54:00 crc kubenswrapper[5005]: I0225 11:54:00.994544 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 11:54:01 crc kubenswrapper[5005]: I0225 11:54:01.906241 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533674-t928c" event={"ID":"3ff346b5-97c2-4bd4-8c95-6cede9eebee8","Type":"ContainerStarted","Data":"b0fb385901b8ad1174552f1576d14e49111b44cb1b61ac7f88c393b30832576d"} Feb 25 11:54:04 crc kubenswrapper[5005]: I0225 11:54:04.024670 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533674-t928c" event={"ID":"3ff346b5-97c2-4bd4-8c95-6cede9eebee8","Type":"ContainerStarted","Data":"a18b02caa67b7f320e1e1122c8be72855d392f6ea3f541a2ab0ab18f6f061311"} Feb 25 11:54:04 crc kubenswrapper[5005]: I0225 11:54:04.149921 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533674-t928c" podStartSLOduration=1.612621721 podStartE2EDuration="4.149900556s" podCreationTimestamp="2026-02-25 11:54:00 +0000 UTC" firstStartedPulling="2026-02-25 11:54:00.994193322 +0000 UTC m=+2155.034925649" lastFinishedPulling="2026-02-25 11:54:03.531472157 +0000 UTC m=+2157.572204484" observedRunningTime="2026-02-25 
11:54:04.142177839 +0000 UTC m=+2158.182910166" watchObservedRunningTime="2026-02-25 11:54:04.149900556 +0000 UTC m=+2158.190632883" Feb 25 11:54:05 crc kubenswrapper[5005]: I0225 11:54:05.033242 5005 generic.go:334] "Generic (PLEG): container finished" podID="3ff346b5-97c2-4bd4-8c95-6cede9eebee8" containerID="a18b02caa67b7f320e1e1122c8be72855d392f6ea3f541a2ab0ab18f6f061311" exitCode=0 Feb 25 11:54:05 crc kubenswrapper[5005]: I0225 11:54:05.033355 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533674-t928c" event={"ID":"3ff346b5-97c2-4bd4-8c95-6cede9eebee8","Type":"ContainerDied","Data":"a18b02caa67b7f320e1e1122c8be72855d392f6ea3f541a2ab0ab18f6f061311"} Feb 25 11:54:06 crc kubenswrapper[5005]: I0225 11:54:06.402901 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533674-t928c" Feb 25 11:54:06 crc kubenswrapper[5005]: I0225 11:54:06.501409 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slfnk\" (UniqueName: \"kubernetes.io/projected/3ff346b5-97c2-4bd4-8c95-6cede9eebee8-kube-api-access-slfnk\") pod \"3ff346b5-97c2-4bd4-8c95-6cede9eebee8\" (UID: \"3ff346b5-97c2-4bd4-8c95-6cede9eebee8\") " Feb 25 11:54:06 crc kubenswrapper[5005]: I0225 11:54:06.512653 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff346b5-97c2-4bd4-8c95-6cede9eebee8-kube-api-access-slfnk" (OuterVolumeSpecName: "kube-api-access-slfnk") pod "3ff346b5-97c2-4bd4-8c95-6cede9eebee8" (UID: "3ff346b5-97c2-4bd4-8c95-6cede9eebee8"). InnerVolumeSpecName "kube-api-access-slfnk". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:54:06 crc kubenswrapper[5005]: I0225 11:54:06.603799 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slfnk\" (UniqueName: \"kubernetes.io/projected/3ff346b5-97c2-4bd4-8c95-6cede9eebee8-kube-api-access-slfnk\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:07 crc kubenswrapper[5005]: I0225 11:54:07.050464 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533674-t928c" event={"ID":"3ff346b5-97c2-4bd4-8c95-6cede9eebee8","Type":"ContainerDied","Data":"b0fb385901b8ad1174552f1576d14e49111b44cb1b61ac7f88c393b30832576d"}
Feb 25 11:54:07 crc kubenswrapper[5005]: I0225 11:54:07.050769 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0fb385901b8ad1174552f1576d14e49111b44cb1b61ac7f88c393b30832576d"
Feb 25 11:54:07 crc kubenswrapper[5005]: I0225 11:54:07.050556 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533674-t928c"
Feb 25 11:54:07 crc kubenswrapper[5005]: I0225 11:54:07.472196 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533668-skbjk"]
Feb 25 11:54:07 crc kubenswrapper[5005]: I0225 11:54:07.480423 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533668-skbjk"]
Feb 25 11:54:08 crc kubenswrapper[5005]: I0225 11:54:08.694927 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="238f3b62-82a3-45ae-b0e2-f1a63e4d6412" path="/var/lib/kubelet/pods/238f3b62-82a3-45ae-b0e2-f1a63e4d6412/volumes"
Feb 25 11:54:11 crc kubenswrapper[5005]: I0225 11:54:11.082932 5005 generic.go:334] "Generic (PLEG): container finished" podID="6148b2f6-0020-48b4-9d78-19fd37400e69" containerID="493dbe9275dfb09c65d05f51dc06055c84bb34f2a0efa8030b8eccbf95dc0c69" exitCode=0
Feb 25 11:54:11 crc kubenswrapper[5005]: I0225 11:54:11.082987 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" event={"ID":"6148b2f6-0020-48b4-9d78-19fd37400e69","Type":"ContainerDied","Data":"493dbe9275dfb09c65d05f51dc06055c84bb34f2a0efa8030b8eccbf95dc0c69"}
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.490189 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb"
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.617498 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-inventory\") pod \"6148b2f6-0020-48b4-9d78-19fd37400e69\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") "
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.617842 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ceph\") pod \"6148b2f6-0020-48b4-9d78-19fd37400e69\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") "
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.617988 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-bootstrap-combined-ca-bundle\") pod \"6148b2f6-0020-48b4-9d78-19fd37400e69\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") "
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.618496 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vghv5\" (UniqueName: \"kubernetes.io/projected/6148b2f6-0020-48b4-9d78-19fd37400e69-kube-api-access-vghv5\") pod \"6148b2f6-0020-48b4-9d78-19fd37400e69\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") "
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.618611 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ssh-key-openstack-edpm-ipam\") pod \"6148b2f6-0020-48b4-9d78-19fd37400e69\" (UID: \"6148b2f6-0020-48b4-9d78-19fd37400e69\") "
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.623189 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ceph" (OuterVolumeSpecName: "ceph") pod "6148b2f6-0020-48b4-9d78-19fd37400e69" (UID: "6148b2f6-0020-48b4-9d78-19fd37400e69"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.623458 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6148b2f6-0020-48b4-9d78-19fd37400e69" (UID: "6148b2f6-0020-48b4-9d78-19fd37400e69"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.625515 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6148b2f6-0020-48b4-9d78-19fd37400e69-kube-api-access-vghv5" (OuterVolumeSpecName: "kube-api-access-vghv5") pod "6148b2f6-0020-48b4-9d78-19fd37400e69" (UID: "6148b2f6-0020-48b4-9d78-19fd37400e69"). InnerVolumeSpecName "kube-api-access-vghv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.641692 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-inventory" (OuterVolumeSpecName: "inventory") pod "6148b2f6-0020-48b4-9d78-19fd37400e69" (UID: "6148b2f6-0020-48b4-9d78-19fd37400e69"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.642091 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6148b2f6-0020-48b4-9d78-19fd37400e69" (UID: "6148b2f6-0020-48b4-9d78-19fd37400e69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.720808 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-inventory\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.720850 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ceph\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.720859 5005 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.720870 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vghv5\" (UniqueName: \"kubernetes.io/projected/6148b2f6-0020-48b4-9d78-19fd37400e69-kube-api-access-vghv5\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:12 crc kubenswrapper[5005]: I0225 11:54:12.720881 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6148b2f6-0020-48b4-9d78-19fd37400e69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.107910 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb" event={"ID":"6148b2f6-0020-48b4-9d78-19fd37400e69","Type":"ContainerDied","Data":"47c30996f116bcbb17452efbde207fc812d91129b1d910b2e87cc3c5ce349c23"}
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.107952 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47c30996f116bcbb17452efbde207fc812d91129b1d910b2e87cc3c5ce349c23"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.107966 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.178259 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"]
Feb 25 11:54:13 crc kubenswrapper[5005]: E0225 11:54:13.178660 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff346b5-97c2-4bd4-8c95-6cede9eebee8" containerName="oc"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.178678 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff346b5-97c2-4bd4-8c95-6cede9eebee8" containerName="oc"
Feb 25 11:54:13 crc kubenswrapper[5005]: E0225 11:54:13.178702 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6148b2f6-0020-48b4-9d78-19fd37400e69" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.178709 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6148b2f6-0020-48b4-9d78-19fd37400e69" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.178913 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff346b5-97c2-4bd4-8c95-6cede9eebee8" containerName="oc"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.178934 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6148b2f6-0020-48b4-9d78-19fd37400e69" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.179621 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.181811 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.181885 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.182333 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.182667 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.183470 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.187282 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"]
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.335676 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.336495 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.336648 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8splj\" (UniqueName: \"kubernetes.io/projected/0a6d3b90-c256-4163-9be6-e28fc31f4094-kube-api-access-8splj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.336815 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.438026 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.438143 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.438209 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.438259 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8splj\" (UniqueName: \"kubernetes.io/projected/0a6d3b90-c256-4163-9be6-e28fc31f4094-kube-api-access-8splj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.442284 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.445565 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.453572 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.459759 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8splj\" (UniqueName: \"kubernetes.io/projected/0a6d3b90-c256-4163-9be6-e28fc31f4094-kube-api-access-8splj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:13 crc kubenswrapper[5005]: I0225 11:54:13.496316 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:14 crc kubenswrapper[5005]: I0225 11:54:14.000096 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"]
Feb 25 11:54:14 crc kubenswrapper[5005]: I0225 11:54:14.115677 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r" event={"ID":"0a6d3b90-c256-4163-9be6-e28fc31f4094","Type":"ContainerStarted","Data":"51da5835bfa15a60d9b50f0a1040e00d43b088d6d0c56ada8529dc9376140d0e"}
Feb 25 11:54:15 crc kubenswrapper[5005]: I0225 11:54:15.128520 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r" event={"ID":"0a6d3b90-c256-4163-9be6-e28fc31f4094","Type":"ContainerStarted","Data":"3e0f5b2bf72a80a7061017721fd6a6a283f9162e96bede6c13f09b7924f37fe9"}
Feb 25 11:54:15 crc kubenswrapper[5005]: I0225 11:54:15.149028 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r" podStartSLOduration=1.439017247 podStartE2EDuration="2.149007448s" podCreationTimestamp="2026-02-25 11:54:13 +0000 UTC" firstStartedPulling="2026-02-25 11:54:14.006108719 +0000 UTC m=+2168.046841046" lastFinishedPulling="2026-02-25 11:54:14.71609888 +0000 UTC m=+2168.756831247" observedRunningTime="2026-02-25 11:54:15.142896323 +0000 UTC m=+2169.183628660" watchObservedRunningTime="2026-02-25 11:54:15.149007448 +0000 UTC m=+2169.189739775"
Feb 25 11:54:18 crc kubenswrapper[5005]: I0225 11:54:18.667983 5005 scope.go:117] "RemoveContainer" containerID="ac18108c62a1c5b5d211df7c3cbe125fecc1bfb291375c0b4f829cae36905f1f"
Feb 25 11:54:18 crc kubenswrapper[5005]: I0225 11:54:18.707012 5005 scope.go:117] "RemoveContainer" containerID="777a1139954e0117593aab3a1f0633f3db4b45fb8183309ae6122a7b9660dc2a"
Feb 25 11:54:18 crc kubenswrapper[5005]: I0225 11:54:18.808426 5005 scope.go:117] "RemoveContainer" containerID="b5f44d6dcd26c451670cd6ada77f547859a7da1b14cc0dd151b8e3ad039e9dce"
Feb 25 11:54:18 crc kubenswrapper[5005]: I0225 11:54:18.849080 5005 scope.go:117] "RemoveContainer" containerID="d553b4a9b75baaa5c41e569d3f322919d6dd87597ba12fdb5d17a92ecf3d3397"
Feb 25 11:54:28 crc kubenswrapper[5005]: I0225 11:54:28.087812 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 25 11:54:28 crc kubenswrapper[5005]: I0225 11:54:28.089413 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 25 11:54:42 crc kubenswrapper[5005]: I0225 11:54:42.237798 5005 generic.go:334] "Generic (PLEG): container finished" podID="0a6d3b90-c256-4163-9be6-e28fc31f4094" containerID="3e0f5b2bf72a80a7061017721fd6a6a283f9162e96bede6c13f09b7924f37fe9" exitCode=0
Feb 25 11:54:42 crc kubenswrapper[5005]: I0225 11:54:42.237886 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r" event={"ID":"0a6d3b90-c256-4163-9be6-e28fc31f4094","Type":"ContainerDied","Data":"3e0f5b2bf72a80a7061017721fd6a6a283f9162e96bede6c13f09b7924f37fe9"}
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.590358 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.712392 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-inventory\") pod \"0a6d3b90-c256-4163-9be6-e28fc31f4094\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") "
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.712460 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8splj\" (UniqueName: \"kubernetes.io/projected/0a6d3b90-c256-4163-9be6-e28fc31f4094-kube-api-access-8splj\") pod \"0a6d3b90-c256-4163-9be6-e28fc31f4094\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") "
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.712529 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ceph\") pod \"0a6d3b90-c256-4163-9be6-e28fc31f4094\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") "
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.712678 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ssh-key-openstack-edpm-ipam\") pod \"0a6d3b90-c256-4163-9be6-e28fc31f4094\" (UID: \"0a6d3b90-c256-4163-9be6-e28fc31f4094\") "
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.718326 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ceph" (OuterVolumeSpecName: "ceph") pod "0a6d3b90-c256-4163-9be6-e28fc31f4094" (UID: "0a6d3b90-c256-4163-9be6-e28fc31f4094"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.719663 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6d3b90-c256-4163-9be6-e28fc31f4094-kube-api-access-8splj" (OuterVolumeSpecName: "kube-api-access-8splj") pod "0a6d3b90-c256-4163-9be6-e28fc31f4094" (UID: "0a6d3b90-c256-4163-9be6-e28fc31f4094"). InnerVolumeSpecName "kube-api-access-8splj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.737706 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a6d3b90-c256-4163-9be6-e28fc31f4094" (UID: "0a6d3b90-c256-4163-9be6-e28fc31f4094"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.738490 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-inventory" (OuterVolumeSpecName: "inventory") pod "0a6d3b90-c256-4163-9be6-e28fc31f4094" (UID: "0a6d3b90-c256-4163-9be6-e28fc31f4094"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.814858 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ceph\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.814896 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.814907 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a6d3b90-c256-4163-9be6-e28fc31f4094-inventory\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:43 crc kubenswrapper[5005]: I0225 11:54:43.814916 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8splj\" (UniqueName: \"kubernetes.io/projected/0a6d3b90-c256-4163-9be6-e28fc31f4094-kube-api-access-8splj\") on node \"crc\" DevicePath \"\""
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.255053 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r" event={"ID":"0a6d3b90-c256-4163-9be6-e28fc31f4094","Type":"ContainerDied","Data":"51da5835bfa15a60d9b50f0a1040e00d43b088d6d0c56ada8529dc9376140d0e"}
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.255317 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51da5835bfa15a60d9b50f0a1040e00d43b088d6d0c56ada8529dc9376140d0e"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.255130 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.372654 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"]
Feb 25 11:54:44 crc kubenswrapper[5005]: E0225 11:54:44.373041 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6d3b90-c256-4163-9be6-e28fc31f4094" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.373059 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6d3b90-c256-4163-9be6-e28fc31f4094" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.373232 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6d3b90-c256-4163-9be6-e28fc31f4094" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.373901 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.375759 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.376062 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.376212 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.377631 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.377702 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.409869 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"]
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.459781 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r79bd\" (UniqueName: \"kubernetes.io/projected/925953b4-980e-4cf1-be73-3c6e33b496c5-kube-api-access-r79bd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.459924 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.459949 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.459976 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.561650 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.561690 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.561713 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.561796 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r79bd\" (UniqueName: \"kubernetes.io/projected/925953b4-980e-4cf1-be73-3c6e33b496c5-kube-api-access-r79bd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.565205 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.566136 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.573231 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.587795 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r79bd\" (UniqueName: \"kubernetes.io/projected/925953b4-980e-4cf1-be73-3c6e33b496c5-kube-api-access-r79bd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-r67xp\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:44 crc kubenswrapper[5005]: I0225 11:54:44.705004 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:45 crc kubenswrapper[5005]: I0225 11:54:45.223516 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"]
Feb 25 11:54:45 crc kubenswrapper[5005]: I0225 11:54:45.266704 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp" event={"ID":"925953b4-980e-4cf1-be73-3c6e33b496c5","Type":"ContainerStarted","Data":"eb7cfda31d25839ea2422b6f5320ba8d68be34aa9d6817e4a39b1f992c070bb2"}
Feb 25 11:54:46 crc kubenswrapper[5005]: I0225 11:54:46.276452 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp" event={"ID":"925953b4-980e-4cf1-be73-3c6e33b496c5","Type":"ContainerStarted","Data":"325301d7a7d61f2220793c5392e03e132fcb02f389f374f891d44f0a255af106"}
Feb 25 11:54:46 crc kubenswrapper[5005]: I0225 11:54:46.305206 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp" podStartSLOduration=1.8461308060000001 podStartE2EDuration="2.305186609s" podCreationTimestamp="2026-02-25 11:54:44 +0000 UTC" firstStartedPulling="2026-02-25 11:54:45.240700722 +0000 UTC m=+2199.281433049" lastFinishedPulling="2026-02-25 11:54:45.699756505 +0000 UTC m=+2199.740488852" observedRunningTime="2026-02-25 11:54:46.299403835 +0000 UTC m=+2200.340136192" watchObservedRunningTime="2026-02-25 11:54:46.305186609 +0000 UTC m=+2200.345918936"
Feb 25 11:54:51 crc kubenswrapper[5005]: I0225 11:54:51.325839 5005 generic.go:334] "Generic (PLEG): container finished" podID="925953b4-980e-4cf1-be73-3c6e33b496c5" containerID="325301d7a7d61f2220793c5392e03e132fcb02f389f374f891d44f0a255af106" exitCode=0
Feb 25 11:54:51 crc kubenswrapper[5005]: I0225 11:54:51.325989 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp" event={"ID":"925953b4-980e-4cf1-be73-3c6e33b496c5","Type":"ContainerDied","Data":"325301d7a7d61f2220793c5392e03e132fcb02f389f374f891d44f0a255af106"}
Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.790983 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp"
Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.922448 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ssh-key-openstack-edpm-ipam\") pod \"925953b4-980e-4cf1-be73-3c6e33b496c5\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") "
Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.922587 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-inventory\") pod \"925953b4-980e-4cf1-be73-3c6e33b496c5\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") "
Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.922847 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ceph\") pod \"925953b4-980e-4cf1-be73-3c6e33b496c5\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") "
Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.923088 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r79bd\" (UniqueName: \"kubernetes.io/projected/925953b4-980e-4cf1-be73-3c6e33b496c5-kube-api-access-r79bd\") pod \"925953b4-980e-4cf1-be73-3c6e33b496c5\" (UID: \"925953b4-980e-4cf1-be73-3c6e33b496c5\") "
Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.934652 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925953b4-980e-4cf1-be73-3c6e33b496c5-kube-api-access-r79bd" (OuterVolumeSpecName: "kube-api-access-r79bd") pod "925953b4-980e-4cf1-be73-3c6e33b496c5" (UID: "925953b4-980e-4cf1-be73-3c6e33b496c5"). InnerVolumeSpecName "kube-api-access-r79bd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.941667 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ceph" (OuterVolumeSpecName: "ceph") pod "925953b4-980e-4cf1-be73-3c6e33b496c5" (UID: "925953b4-980e-4cf1-be73-3c6e33b496c5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.957935 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-inventory" (OuterVolumeSpecName: "inventory") pod "925953b4-980e-4cf1-be73-3c6e33b496c5" (UID: "925953b4-980e-4cf1-be73-3c6e33b496c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:54:52 crc kubenswrapper[5005]: I0225 11:54:52.960038 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "925953b4-980e-4cf1-be73-3c6e33b496c5" (UID: "925953b4-980e-4cf1-be73-3c6e33b496c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.025871 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r79bd\" (UniqueName: \"kubernetes.io/projected/925953b4-980e-4cf1-be73-3c6e33b496c5-kube-api-access-r79bd\") on node \"crc\" DevicePath \"\"" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.026110 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.026197 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.026307 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/925953b4-980e-4cf1-be73-3c6e33b496c5-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.343189 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp" event={"ID":"925953b4-980e-4cf1-be73-3c6e33b496c5","Type":"ContainerDied","Data":"eb7cfda31d25839ea2422b6f5320ba8d68be34aa9d6817e4a39b1f992c070bb2"} Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.343503 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7cfda31d25839ea2422b6f5320ba8d68be34aa9d6817e4a39b1f992c070bb2" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.343275 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-r67xp" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.421606 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4"] Feb 25 11:54:53 crc kubenswrapper[5005]: E0225 11:54:53.422235 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925953b4-980e-4cf1-be73-3c6e33b496c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.422520 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="925953b4-980e-4cf1-be73-3c6e33b496c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.422739 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="925953b4-980e-4cf1-be73-3c6e33b496c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.423321 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.425693 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.426025 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.426251 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.426486 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.435555 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.451048 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4"] Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.534924 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.534972 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29shz\" (UniqueName: \"kubernetes.io/projected/daf2a90e-77cb-446f-8fbf-1f526edc75d9-kube-api-access-29shz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.535045 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.535208 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.637654 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.637729 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.637827 5005 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.637855 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29shz\" (UniqueName: \"kubernetes.io/projected/daf2a90e-77cb-446f-8fbf-1f526edc75d9-kube-api-access-29shz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.643017 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.643020 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.643203 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.654955 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29shz\" (UniqueName: \"kubernetes.io/projected/daf2a90e-77cb-446f-8fbf-1f526edc75d9-kube-api-access-29shz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x9rp4\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:53 crc kubenswrapper[5005]: I0225 11:54:53.741026 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:54:54 crc kubenswrapper[5005]: I0225 11:54:54.247178 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4"] Feb 25 11:54:54 crc kubenswrapper[5005]: W0225 11:54:54.253559 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaf2a90e_77cb_446f_8fbf_1f526edc75d9.slice/crio-527f90d6b478364f0aac9d1f5b1952f1471f5bb6ce2f5952feccf55b7ee0dc7a WatchSource:0}: Error finding container 527f90d6b478364f0aac9d1f5b1952f1471f5bb6ce2f5952feccf55b7ee0dc7a: Status 404 returned error can't find the container with id 527f90d6b478364f0aac9d1f5b1952f1471f5bb6ce2f5952feccf55b7ee0dc7a Feb 25 11:54:54 crc kubenswrapper[5005]: I0225 11:54:54.356305 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" event={"ID":"daf2a90e-77cb-446f-8fbf-1f526edc75d9","Type":"ContainerStarted","Data":"527f90d6b478364f0aac9d1f5b1952f1471f5bb6ce2f5952feccf55b7ee0dc7a"} Feb 25 11:54:55 crc kubenswrapper[5005]: I0225 11:54:55.364670 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" 
event={"ID":"daf2a90e-77cb-446f-8fbf-1f526edc75d9","Type":"ContainerStarted","Data":"328b132120f793f19552a96c37d91677d23eb6ca089aef4b4de080a76d07bdd3"} Feb 25 11:54:55 crc kubenswrapper[5005]: I0225 11:54:55.390007 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" podStartSLOduration=2.001099788 podStartE2EDuration="2.389990331s" podCreationTimestamp="2026-02-25 11:54:53 +0000 UTC" firstStartedPulling="2026-02-25 11:54:54.25705825 +0000 UTC m=+2208.297790587" lastFinishedPulling="2026-02-25 11:54:54.645948763 +0000 UTC m=+2208.686681130" observedRunningTime="2026-02-25 11:54:55.389128493 +0000 UTC m=+2209.429860820" watchObservedRunningTime="2026-02-25 11:54:55.389990331 +0000 UTC m=+2209.430722658" Feb 25 11:54:58 crc kubenswrapper[5005]: I0225 11:54:58.086973 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:54:58 crc kubenswrapper[5005]: I0225 11:54:58.087341 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.020014 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8m7nm"] Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.023264 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.035184 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8m7nm"] Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.050027 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-catalog-content\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.050301 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blqwk\" (UniqueName: \"kubernetes.io/projected/7a052640-2d9d-40fc-8676-d972c293be36-kube-api-access-blqwk\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.050836 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-utilities\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.152665 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-catalog-content\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.152728 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-blqwk\" (UniqueName: \"kubernetes.io/projected/7a052640-2d9d-40fc-8676-d972c293be36-kube-api-access-blqwk\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.152816 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-utilities\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.153200 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-catalog-content\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.153565 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-utilities\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.181396 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqwk\" (UniqueName: \"kubernetes.io/projected/7a052640-2d9d-40fc-8676-d972c293be36-kube-api-access-blqwk\") pod \"certified-operators-8m7nm\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.353286 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:26 crc kubenswrapper[5005]: I0225 11:55:26.886431 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8m7nm"] Feb 25 11:55:27 crc kubenswrapper[5005]: I0225 11:55:27.213381 5005 generic.go:334] "Generic (PLEG): container finished" podID="7a052640-2d9d-40fc-8676-d972c293be36" containerID="7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501" exitCode=0 Feb 25 11:55:27 crc kubenswrapper[5005]: I0225 11:55:27.213433 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerDied","Data":"7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501"} Feb 25 11:55:27 crc kubenswrapper[5005]: I0225 11:55:27.213465 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerStarted","Data":"af03ff24ccaadcc463bbbd74bf2ec444491db17146030cf08091b07c740065dc"} Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.086980 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.087499 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.087549 5005 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.088266 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.088417 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" gracePeriod=600 Feb 25 11:55:28 crc kubenswrapper[5005]: E0225 11:55:28.221336 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.228066 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerStarted","Data":"61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01"} Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.231402 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" exitCode=0 Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.231441 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162"} Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.231465 5005 scope.go:117] "RemoveContainer" containerID="ad704000137b23295360aa1c0e0fd91240a3ef27bd833fc620dad6c30f9284e5" Feb 25 11:55:28 crc kubenswrapper[5005]: I0225 11:55:28.232197 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:55:28 crc kubenswrapper[5005]: E0225 11:55:28.233070 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:55:29 crc kubenswrapper[5005]: I0225 11:55:29.243749 5005 generic.go:334] "Generic (PLEG): container finished" podID="7a052640-2d9d-40fc-8676-d972c293be36" containerID="61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01" exitCode=0 Feb 25 11:55:29 crc kubenswrapper[5005]: I0225 11:55:29.243850 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerDied","Data":"61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01"} Feb 25 11:55:30 crc kubenswrapper[5005]: I0225 11:55:30.261617 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerStarted","Data":"154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f"} Feb 25 11:55:30 crc kubenswrapper[5005]: I0225 11:55:30.264895 5005 generic.go:334] "Generic (PLEG): container finished" podID="daf2a90e-77cb-446f-8fbf-1f526edc75d9" containerID="328b132120f793f19552a96c37d91677d23eb6ca089aef4b4de080a76d07bdd3" exitCode=0 Feb 25 11:55:30 crc kubenswrapper[5005]: I0225 11:55:30.264942 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" event={"ID":"daf2a90e-77cb-446f-8fbf-1f526edc75d9","Type":"ContainerDied","Data":"328b132120f793f19552a96c37d91677d23eb6ca089aef4b4de080a76d07bdd3"} Feb 25 11:55:30 crc kubenswrapper[5005]: I0225 11:55:30.284099 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8m7nm" podStartSLOduration=2.831175428 podStartE2EDuration="5.28406877s" podCreationTimestamp="2026-02-25 11:55:25 +0000 UTC" firstStartedPulling="2026-02-25 11:55:27.215234958 +0000 UTC m=+2241.255967285" lastFinishedPulling="2026-02-25 11:55:29.6681283 +0000 UTC m=+2243.708860627" observedRunningTime="2026-02-25 11:55:30.283570974 +0000 UTC m=+2244.324303301" watchObservedRunningTime="2026-02-25 11:55:30.28406877 +0000 UTC m=+2244.324801097" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.720954 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.798261 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ssh-key-openstack-edpm-ipam\") pod \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.798366 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-inventory\") pod \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.798412 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29shz\" (UniqueName: \"kubernetes.io/projected/daf2a90e-77cb-446f-8fbf-1f526edc75d9-kube-api-access-29shz\") pod \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.798592 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ceph\") pod \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\" (UID: \"daf2a90e-77cb-446f-8fbf-1f526edc75d9\") " Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.804696 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf2a90e-77cb-446f-8fbf-1f526edc75d9-kube-api-access-29shz" (OuterVolumeSpecName: "kube-api-access-29shz") pod "daf2a90e-77cb-446f-8fbf-1f526edc75d9" (UID: "daf2a90e-77cb-446f-8fbf-1f526edc75d9"). InnerVolumeSpecName "kube-api-access-29shz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.805588 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ceph" (OuterVolumeSpecName: "ceph") pod "daf2a90e-77cb-446f-8fbf-1f526edc75d9" (UID: "daf2a90e-77cb-446f-8fbf-1f526edc75d9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.831631 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-inventory" (OuterVolumeSpecName: "inventory") pod "daf2a90e-77cb-446f-8fbf-1f526edc75d9" (UID: "daf2a90e-77cb-446f-8fbf-1f526edc75d9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.837804 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "daf2a90e-77cb-446f-8fbf-1f526edc75d9" (UID: "daf2a90e-77cb-446f-8fbf-1f526edc75d9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.900588 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.900627 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.900638 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29shz\" (UniqueName: \"kubernetes.io/projected/daf2a90e-77cb-446f-8fbf-1f526edc75d9-kube-api-access-29shz\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:31 crc kubenswrapper[5005]: I0225 11:55:31.900646 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/daf2a90e-77cb-446f-8fbf-1f526edc75d9-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.292050 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" event={"ID":"daf2a90e-77cb-446f-8fbf-1f526edc75d9","Type":"ContainerDied","Data":"527f90d6b478364f0aac9d1f5b1952f1471f5bb6ce2f5952feccf55b7ee0dc7a"} Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.292108 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527f90d6b478364f0aac9d1f5b1952f1471f5bb6ce2f5952feccf55b7ee0dc7a" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.292198 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x9rp4" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.462720 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc"] Feb 25 11:55:32 crc kubenswrapper[5005]: E0225 11:55:32.463286 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf2a90e-77cb-446f-8fbf-1f526edc75d9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.463314 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf2a90e-77cb-446f-8fbf-1f526edc75d9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.463567 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf2a90e-77cb-446f-8fbf-1f526edc75d9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.464418 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.467667 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.467902 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.468205 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.469315 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.480085 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc"] Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.484982 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.513527 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.513595 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: 
\"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.513709 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tz2h\" (UniqueName: \"kubernetes.io/projected/951c3390-5e6e-4ac4-9833-f7c71959fae6-kube-api-access-8tz2h\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.513764 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.616284 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tz2h\" (UniqueName: \"kubernetes.io/projected/951c3390-5e6e-4ac4-9833-f7c71959fae6-kube-api-access-8tz2h\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.616393 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.616434 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.616458 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.622044 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.622305 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.623871 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: 
\"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.633132 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tz2h\" (UniqueName: \"kubernetes.io/projected/951c3390-5e6e-4ac4-9833-f7c71959fae6-kube-api-access-8tz2h\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:32 crc kubenswrapper[5005]: I0225 11:55:32.789513 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:33 crc kubenswrapper[5005]: I0225 11:55:33.297239 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc"] Feb 25 11:55:34 crc kubenswrapper[5005]: I0225 11:55:34.313633 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" event={"ID":"951c3390-5e6e-4ac4-9833-f7c71959fae6","Type":"ContainerStarted","Data":"d5dc17cce2fa9d61284c4b32c4b8310a04a832d1b78013c56127f943f49c3528"} Feb 25 11:55:34 crc kubenswrapper[5005]: I0225 11:55:34.314526 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" event={"ID":"951c3390-5e6e-4ac4-9833-f7c71959fae6","Type":"ContainerStarted","Data":"76d9f53793affdf2e8619d1ed88517b1b89da4a59387cbfcc0bca611d81ccc96"} Feb 25 11:55:34 crc kubenswrapper[5005]: I0225 11:55:34.337037 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" podStartSLOduration=1.922838083 podStartE2EDuration="2.337010852s" podCreationTimestamp="2026-02-25 11:55:32 +0000 UTC" 
firstStartedPulling="2026-02-25 11:55:33.305892991 +0000 UTC m=+2247.346625318" lastFinishedPulling="2026-02-25 11:55:33.72006576 +0000 UTC m=+2247.760798087" observedRunningTime="2026-02-25 11:55:34.33285866 +0000 UTC m=+2248.373591027" watchObservedRunningTime="2026-02-25 11:55:34.337010852 +0000 UTC m=+2248.377743199" Feb 25 11:55:36 crc kubenswrapper[5005]: I0225 11:55:36.353657 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:36 crc kubenswrapper[5005]: I0225 11:55:36.354017 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:36 crc kubenswrapper[5005]: I0225 11:55:36.426321 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:37 crc kubenswrapper[5005]: I0225 11:55:37.429126 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:37 crc kubenswrapper[5005]: I0225 11:55:37.493141 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8m7nm"] Feb 25 11:55:38 crc kubenswrapper[5005]: I0225 11:55:38.365337 5005 generic.go:334] "Generic (PLEG): container finished" podID="951c3390-5e6e-4ac4-9833-f7c71959fae6" containerID="d5dc17cce2fa9d61284c4b32c4b8310a04a832d1b78013c56127f943f49c3528" exitCode=0 Feb 25 11:55:38 crc kubenswrapper[5005]: I0225 11:55:38.365426 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" event={"ID":"951c3390-5e6e-4ac4-9833-f7c71959fae6","Type":"ContainerDied","Data":"d5dc17cce2fa9d61284c4b32c4b8310a04a832d1b78013c56127f943f49c3528"} Feb 25 11:55:39 crc kubenswrapper[5005]: I0225 11:55:39.377542 5005 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-8m7nm" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="registry-server" containerID="cri-o://154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f" gracePeriod=2 Feb 25 11:55:39 crc kubenswrapper[5005]: I0225 11:55:39.904222 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.003214 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.038092 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tz2h\" (UniqueName: \"kubernetes.io/projected/951c3390-5e6e-4ac4-9833-f7c71959fae6-kube-api-access-8tz2h\") pod \"951c3390-5e6e-4ac4-9833-f7c71959fae6\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.038230 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-inventory\") pod \"951c3390-5e6e-4ac4-9833-f7c71959fae6\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.038287 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ssh-key-openstack-edpm-ipam\") pod \"951c3390-5e6e-4ac4-9833-f7c71959fae6\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.038418 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ceph\") pod 
\"951c3390-5e6e-4ac4-9833-f7c71959fae6\" (UID: \"951c3390-5e6e-4ac4-9833-f7c71959fae6\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.044453 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951c3390-5e6e-4ac4-9833-f7c71959fae6-kube-api-access-8tz2h" (OuterVolumeSpecName: "kube-api-access-8tz2h") pod "951c3390-5e6e-4ac4-9833-f7c71959fae6" (UID: "951c3390-5e6e-4ac4-9833-f7c71959fae6"). InnerVolumeSpecName "kube-api-access-8tz2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.044577 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ceph" (OuterVolumeSpecName: "ceph") pod "951c3390-5e6e-4ac4-9833-f7c71959fae6" (UID: "951c3390-5e6e-4ac4-9833-f7c71959fae6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.065048 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-inventory" (OuterVolumeSpecName: "inventory") pod "951c3390-5e6e-4ac4-9833-f7c71959fae6" (UID: "951c3390-5e6e-4ac4-9833-f7c71959fae6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.065594 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "951c3390-5e6e-4ac4-9833-f7c71959fae6" (UID: "951c3390-5e6e-4ac4-9833-f7c71959fae6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.139684 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-utilities\") pod \"7a052640-2d9d-40fc-8676-d972c293be36\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.139797 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blqwk\" (UniqueName: \"kubernetes.io/projected/7a052640-2d9d-40fc-8676-d972c293be36-kube-api-access-blqwk\") pod \"7a052640-2d9d-40fc-8676-d972c293be36\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.139826 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-catalog-content\") pod \"7a052640-2d9d-40fc-8676-d972c293be36\" (UID: \"7a052640-2d9d-40fc-8676-d972c293be36\") " Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.141423 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-utilities" (OuterVolumeSpecName: "utilities") pod "7a052640-2d9d-40fc-8676-d972c293be36" (UID: "7a052640-2d9d-40fc-8676-d972c293be36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.141823 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.141869 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.141893 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/951c3390-5e6e-4ac4-9833-f7c71959fae6-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.141912 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tz2h\" (UniqueName: \"kubernetes.io/projected/951c3390-5e6e-4ac4-9833-f7c71959fae6-kube-api-access-8tz2h\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.144175 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a052640-2d9d-40fc-8676-d972c293be36-kube-api-access-blqwk" (OuterVolumeSpecName: "kube-api-access-blqwk") pod "7a052640-2d9d-40fc-8676-d972c293be36" (UID: "7a052640-2d9d-40fc-8676-d972c293be36"). InnerVolumeSpecName "kube-api-access-blqwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.218773 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a052640-2d9d-40fc-8676-d972c293be36" (UID: "7a052640-2d9d-40fc-8676-d972c293be36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.243298 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.243340 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blqwk\" (UniqueName: \"kubernetes.io/projected/7a052640-2d9d-40fc-8676-d972c293be36-kube-api-access-blqwk\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.243354 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a052640-2d9d-40fc-8676-d972c293be36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.391402 5005 generic.go:334] "Generic (PLEG): container finished" podID="7a052640-2d9d-40fc-8676-d972c293be36" containerID="154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f" exitCode=0 Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.391503 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8m7nm" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.391554 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerDied","Data":"154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f"} Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.391679 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8m7nm" event={"ID":"7a052640-2d9d-40fc-8676-d972c293be36","Type":"ContainerDied","Data":"af03ff24ccaadcc463bbbd74bf2ec444491db17146030cf08091b07c740065dc"} Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.391761 5005 scope.go:117] "RemoveContainer" containerID="154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.400175 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" event={"ID":"951c3390-5e6e-4ac4-9833-f7c71959fae6","Type":"ContainerDied","Data":"76d9f53793affdf2e8619d1ed88517b1b89da4a59387cbfcc0bca611d81ccc96"} Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.400436 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d9f53793affdf2e8619d1ed88517b1b89da4a59387cbfcc0bca611d81ccc96" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.400635 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.427835 5005 scope.go:117] "RemoveContainer" containerID="61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.456807 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8m7nm"] Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.470287 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8m7nm"] Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.491618 5005 scope.go:117] "RemoveContainer" containerID="7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.499644 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk"] Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.500208 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951c3390-5e6e-4ac4-9833-f7c71959fae6" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.500235 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="951c3390-5e6e-4ac4-9833-f7c71959fae6" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.500283 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="extract-utilities" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.500296 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="extract-utilities" Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.500326 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="registry-server" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.500336 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="registry-server" Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.500356 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="extract-content" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.500386 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="extract-content" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.500634 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a052640-2d9d-40fc-8676-d972c293be36" containerName="registry-server" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.500658 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="951c3390-5e6e-4ac4-9833-f7c71959fae6" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.501523 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.505608 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.505687 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.505609 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.506158 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.506515 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.513237 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk"] Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.520903 5005 scope.go:117] "RemoveContainer" containerID="154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f" Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.535789 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f\": container with ID starting with 154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f not found: ID does not exist" containerID="154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.536100 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f"} err="failed to get container status \"154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f\": rpc error: code = NotFound desc = could not find container \"154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f\": container with ID starting with 154381f9f4526c079ca38c4148ce7bc24b143da470539c75c724204a52a2e38f not found: ID does not exist" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.536128 5005 scope.go:117] "RemoveContainer" containerID="61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01" Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.536581 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01\": container with ID starting with 61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01 not found: ID does not exist" containerID="61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.536609 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01"} err="failed to get container status \"61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01\": rpc error: code = NotFound desc = could not find container \"61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01\": container with ID starting with 61f6037ecc7720fd18942835d5e4f9b11a779b3db9986a3f6f5f02c5cb5a6a01 not found: ID does not exist" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.536627 5005 scope.go:117] "RemoveContainer" containerID="7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501" Feb 25 11:55:40 crc kubenswrapper[5005]: E0225 11:55:40.537263 5005 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501\": container with ID starting with 7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501 not found: ID does not exist" containerID="7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.537324 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501"} err="failed to get container status \"7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501\": rpc error: code = NotFound desc = could not find container \"7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501\": container with ID starting with 7742044cd2e1fabfab50e84e65a1b56ccaca55428911f39877c612fc96f8d501 not found: ID does not exist" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.649205 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.649997 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgdpt\" (UniqueName: \"kubernetes.io/projected/de337099-6e1c-496a-a163-ebc04d7a0fc8-kube-api-access-tgdpt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.650288 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.650582 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.704753 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a052640-2d9d-40fc-8676-d972c293be36" path="/var/lib/kubelet/pods/7a052640-2d9d-40fc-8676-d972c293be36/volumes" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.752847 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.752979 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc 
kubenswrapper[5005]: I0225 11:55:40.753022 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgdpt\" (UniqueName: \"kubernetes.io/projected/de337099-6e1c-496a-a163-ebc04d7a0fc8-kube-api-access-tgdpt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.753086 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.760937 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.761839 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.761848 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ceph\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.773482 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgdpt\" (UniqueName: \"kubernetes.io/projected/de337099-6e1c-496a-a163-ebc04d7a0fc8-kube-api-access-tgdpt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:40 crc kubenswrapper[5005]: I0225 11:55:40.885601 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:55:41 crc kubenswrapper[5005]: I0225 11:55:41.439608 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk"] Feb 25 11:55:41 crc kubenswrapper[5005]: I0225 11:55:41.685921 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:55:41 crc kubenswrapper[5005]: E0225 11:55:41.686310 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:55:42 crc kubenswrapper[5005]: I0225 11:55:42.422201 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" 
event={"ID":"de337099-6e1c-496a-a163-ebc04d7a0fc8","Type":"ContainerStarted","Data":"9b30d540d55b2761c231ab6c871425bf06d0ac8703378a3fb08f2026387566e8"} Feb 25 11:55:42 crc kubenswrapper[5005]: I0225 11:55:42.422585 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" event={"ID":"de337099-6e1c-496a-a163-ebc04d7a0fc8","Type":"ContainerStarted","Data":"b424d66d45065393dbef0475b1df7dd89c3b17a85c69ad7eadc7531e8d780ab0"} Feb 25 11:55:42 crc kubenswrapper[5005]: I0225 11:55:42.453779 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" podStartSLOduration=1.984743535 podStartE2EDuration="2.453747284s" podCreationTimestamp="2026-02-25 11:55:40 +0000 UTC" firstStartedPulling="2026-02-25 11:55:41.445818773 +0000 UTC m=+2255.486551110" lastFinishedPulling="2026-02-25 11:55:41.914822502 +0000 UTC m=+2255.955554859" observedRunningTime="2026-02-25 11:55:42.441255635 +0000 UTC m=+2256.481987962" watchObservedRunningTime="2026-02-25 11:55:42.453747284 +0000 UTC m=+2256.494479651" Feb 25 11:55:54 crc kubenswrapper[5005]: I0225 11:55:54.685990 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:55:54 crc kubenswrapper[5005]: E0225 11:55:54.686932 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.149656 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533676-qgdl8"] Feb 25 
11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.151764 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.154160 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.154267 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.154602 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.161466 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533676-qgdl8"] Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.314664 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcnxp\" (UniqueName: \"kubernetes.io/projected/a4e10073-8de9-4f72-ba16-6f2b4a42d757-kube-api-access-kcnxp\") pod \"auto-csr-approver-29533676-qgdl8\" (UID: \"a4e10073-8de9-4f72-ba16-6f2b4a42d757\") " pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.416045 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcnxp\" (UniqueName: \"kubernetes.io/projected/a4e10073-8de9-4f72-ba16-6f2b4a42d757-kube-api-access-kcnxp\") pod \"auto-csr-approver-29533676-qgdl8\" (UID: \"a4e10073-8de9-4f72-ba16-6f2b4a42d757\") " pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.437204 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcnxp\" (UniqueName: 
\"kubernetes.io/projected/a4e10073-8de9-4f72-ba16-6f2b4a42d757-kube-api-access-kcnxp\") pod \"auto-csr-approver-29533676-qgdl8\" (UID: \"a4e10073-8de9-4f72-ba16-6f2b4a42d757\") " pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:00 crc kubenswrapper[5005]: I0225 11:56:00.471036 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:01 crc kubenswrapper[5005]: I0225 11:56:01.054795 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533676-qgdl8"] Feb 25 11:56:01 crc kubenswrapper[5005]: W0225 11:56:01.057632 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e10073_8de9_4f72_ba16_6f2b4a42d757.slice/crio-a7f88097ee4b979d22d6f4543e0f9d51ca5b0be0ab3a0a7f4d3a6322b3264870 WatchSource:0}: Error finding container a7f88097ee4b979d22d6f4543e0f9d51ca5b0be0ab3a0a7f4d3a6322b3264870: Status 404 returned error can't find the container with id a7f88097ee4b979d22d6f4543e0f9d51ca5b0be0ab3a0a7f4d3a6322b3264870 Feb 25 11:56:01 crc kubenswrapper[5005]: I0225 11:56:01.616928 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" event={"ID":"a4e10073-8de9-4f72-ba16-6f2b4a42d757","Type":"ContainerStarted","Data":"a7f88097ee4b979d22d6f4543e0f9d51ca5b0be0ab3a0a7f4d3a6322b3264870"} Feb 25 11:56:02 crc kubenswrapper[5005]: I0225 11:56:02.625723 5005 generic.go:334] "Generic (PLEG): container finished" podID="a4e10073-8de9-4f72-ba16-6f2b4a42d757" containerID="15c5efba7becaaf9b6636843e39ad8cfab700da83f166cb98ca85f157c4dcef5" exitCode=0 Feb 25 11:56:02 crc kubenswrapper[5005]: I0225 11:56:02.625771 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" 
event={"ID":"a4e10073-8de9-4f72-ba16-6f2b4a42d757","Type":"ContainerDied","Data":"15c5efba7becaaf9b6636843e39ad8cfab700da83f166cb98ca85f157c4dcef5"} Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.002678 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.092028 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcnxp\" (UniqueName: \"kubernetes.io/projected/a4e10073-8de9-4f72-ba16-6f2b4a42d757-kube-api-access-kcnxp\") pod \"a4e10073-8de9-4f72-ba16-6f2b4a42d757\" (UID: \"a4e10073-8de9-4f72-ba16-6f2b4a42d757\") " Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.098045 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e10073-8de9-4f72-ba16-6f2b4a42d757-kube-api-access-kcnxp" (OuterVolumeSpecName: "kube-api-access-kcnxp") pod "a4e10073-8de9-4f72-ba16-6f2b4a42d757" (UID: "a4e10073-8de9-4f72-ba16-6f2b4a42d757"). InnerVolumeSpecName "kube-api-access-kcnxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.193494 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcnxp\" (UniqueName: \"kubernetes.io/projected/a4e10073-8de9-4f72-ba16-6f2b4a42d757-kube-api-access-kcnxp\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.645568 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" event={"ID":"a4e10073-8de9-4f72-ba16-6f2b4a42d757","Type":"ContainerDied","Data":"a7f88097ee4b979d22d6f4543e0f9d51ca5b0be0ab3a0a7f4d3a6322b3264870"} Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.645610 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f88097ee4b979d22d6f4543e0f9d51ca5b0be0ab3a0a7f4d3a6322b3264870" Feb 25 11:56:04 crc kubenswrapper[5005]: I0225 11:56:04.645640 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533676-qgdl8" Feb 25 11:56:05 crc kubenswrapper[5005]: I0225 11:56:05.086988 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533670-9xs56"] Feb 25 11:56:05 crc kubenswrapper[5005]: I0225 11:56:05.093975 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533670-9xs56"] Feb 25 11:56:06 crc kubenswrapper[5005]: I0225 11:56:06.696178 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:56:06 crc kubenswrapper[5005]: E0225 11:56:06.696778 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:56:06 crc kubenswrapper[5005]: I0225 11:56:06.703116 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4" path="/var/lib/kubelet/pods/1f3f0c56-c8c5-4737-8e2a-de8bd6c86fc4/volumes" Feb 25 11:56:18 crc kubenswrapper[5005]: I0225 11:56:18.956138 5005 scope.go:117] "RemoveContainer" containerID="94620fbde0d4da1256b83d4d61ac5fb46231a6fea341a2495ba4e1ca66ac1b37" Feb 25 11:56:21 crc kubenswrapper[5005]: I0225 11:56:21.687022 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:56:21 crc kubenswrapper[5005]: E0225 11:56:21.688051 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:56:23 crc kubenswrapper[5005]: I0225 11:56:23.875863 5005 generic.go:334] "Generic (PLEG): container finished" podID="de337099-6e1c-496a-a163-ebc04d7a0fc8" containerID="9b30d540d55b2761c231ab6c871425bf06d0ac8703378a3fb08f2026387566e8" exitCode=0 Feb 25 11:56:23 crc kubenswrapper[5005]: I0225 11:56:23.875958 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" event={"ID":"de337099-6e1c-496a-a163-ebc04d7a0fc8","Type":"ContainerDied","Data":"9b30d540d55b2761c231ab6c871425bf06d0ac8703378a3fb08f2026387566e8"} Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.333161 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.362749 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ssh-key-openstack-edpm-ipam\") pod \"de337099-6e1c-496a-a163-ebc04d7a0fc8\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.363008 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgdpt\" (UniqueName: \"kubernetes.io/projected/de337099-6e1c-496a-a163-ebc04d7a0fc8-kube-api-access-tgdpt\") pod \"de337099-6e1c-496a-a163-ebc04d7a0fc8\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.363060 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ceph\") pod \"de337099-6e1c-496a-a163-ebc04d7a0fc8\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.363191 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-inventory\") pod \"de337099-6e1c-496a-a163-ebc04d7a0fc8\" (UID: \"de337099-6e1c-496a-a163-ebc04d7a0fc8\") " Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.374711 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de337099-6e1c-496a-a163-ebc04d7a0fc8-kube-api-access-tgdpt" (OuterVolumeSpecName: "kube-api-access-tgdpt") pod "de337099-6e1c-496a-a163-ebc04d7a0fc8" (UID: "de337099-6e1c-496a-a163-ebc04d7a0fc8"). InnerVolumeSpecName "kube-api-access-tgdpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.382679 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ceph" (OuterVolumeSpecName: "ceph") pod "de337099-6e1c-496a-a163-ebc04d7a0fc8" (UID: "de337099-6e1c-496a-a163-ebc04d7a0fc8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.390098 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-inventory" (OuterVolumeSpecName: "inventory") pod "de337099-6e1c-496a-a163-ebc04d7a0fc8" (UID: "de337099-6e1c-496a-a163-ebc04d7a0fc8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.394728 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de337099-6e1c-496a-a163-ebc04d7a0fc8" (UID: "de337099-6e1c-496a-a163-ebc04d7a0fc8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.465527 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.465564 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.465583 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgdpt\" (UniqueName: \"kubernetes.io/projected/de337099-6e1c-496a-a163-ebc04d7a0fc8-kube-api-access-tgdpt\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.465597 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de337099-6e1c-496a-a163-ebc04d7a0fc8-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.897451 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" event={"ID":"de337099-6e1c-496a-a163-ebc04d7a0fc8","Type":"ContainerDied","Data":"b424d66d45065393dbef0475b1df7dd89c3b17a85c69ad7eadc7531e8d780ab0"} Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.897791 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b424d66d45065393dbef0475b1df7dd89c3b17a85c69ad7eadc7531e8d780ab0" Feb 25 11:56:25 crc kubenswrapper[5005]: I0225 11:56:25.897560 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.003827 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zk2nb"] Feb 25 11:56:26 crc kubenswrapper[5005]: E0225 11:56:26.004182 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de337099-6e1c-496a-a163-ebc04d7a0fc8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.004197 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="de337099-6e1c-496a-a163-ebc04d7a0fc8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:56:26 crc kubenswrapper[5005]: E0225 11:56:26.004227 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e10073-8de9-4f72-ba16-6f2b4a42d757" containerName="oc" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.004234 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e10073-8de9-4f72-ba16-6f2b4a42d757" containerName="oc" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.004425 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e10073-8de9-4f72-ba16-6f2b4a42d757" containerName="oc" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.004447 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="de337099-6e1c-496a-a163-ebc04d7a0fc8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.004961 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.008709 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.009111 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.009623 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.010609 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.012504 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.018856 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zk2nb"] Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.078981 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.079100 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" 
Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.079231 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5j67\" (UniqueName: \"kubernetes.io/projected/fc7b1540-d8b4-4387-98ac-a56fb073470d-kube-api-access-v5j67\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.079285 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ceph\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.180588 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5j67\" (UniqueName: \"kubernetes.io/projected/fc7b1540-d8b4-4387-98ac-a56fb073470d-kube-api-access-v5j67\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.180661 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ceph\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.180705 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.180740 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.186205 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ceph\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.187872 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.188226 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.211992 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5j67\" (UniqueName: \"kubernetes.io/projected/fc7b1540-d8b4-4387-98ac-a56fb073470d-kube-api-access-v5j67\") pod \"ssh-known-hosts-edpm-deployment-zk2nb\" (UID: 
\"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.320126 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.854875 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zk2nb"] Feb 25 11:56:26 crc kubenswrapper[5005]: I0225 11:56:26.905614 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" event={"ID":"fc7b1540-d8b4-4387-98ac-a56fb073470d","Type":"ContainerStarted","Data":"f957545a93dda6af1beeb3af98ef4ade19b0fae48e070260dca27e27fae5e550"} Feb 25 11:56:27 crc kubenswrapper[5005]: I0225 11:56:27.914669 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" event={"ID":"fc7b1540-d8b4-4387-98ac-a56fb073470d","Type":"ContainerStarted","Data":"78849ce5f70ddf691803583c36b60ebf7dad10642fb1963a71e76c4ebe1ef29e"} Feb 25 11:56:27 crc kubenswrapper[5005]: I0225 11:56:27.934389 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" podStartSLOduration=2.442088752 podStartE2EDuration="2.934355527s" podCreationTimestamp="2026-02-25 11:56:25 +0000 UTC" firstStartedPulling="2026-02-25 11:56:26.864906277 +0000 UTC m=+2300.905638604" lastFinishedPulling="2026-02-25 11:56:27.357173042 +0000 UTC m=+2301.397905379" observedRunningTime="2026-02-25 11:56:27.931295101 +0000 UTC m=+2301.972027428" watchObservedRunningTime="2026-02-25 11:56:27.934355527 +0000 UTC m=+2301.975087854" Feb 25 11:56:32 crc kubenswrapper[5005]: I0225 11:56:32.686751 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:56:32 crc kubenswrapper[5005]: E0225 11:56:32.687805 5005 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:56:36 crc kubenswrapper[5005]: I0225 11:56:36.996432 5005 generic.go:334] "Generic (PLEG): container finished" podID="fc7b1540-d8b4-4387-98ac-a56fb073470d" containerID="78849ce5f70ddf691803583c36b60ebf7dad10642fb1963a71e76c4ebe1ef29e" exitCode=0 Feb 25 11:56:36 crc kubenswrapper[5005]: I0225 11:56:36.996868 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" event={"ID":"fc7b1540-d8b4-4387-98ac-a56fb073470d","Type":"ContainerDied","Data":"78849ce5f70ddf691803583c36b60ebf7dad10642fb1963a71e76c4ebe1ef29e"} Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.426600 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.595046 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ssh-key-openstack-edpm-ipam\") pod \"fc7b1540-d8b4-4387-98ac-a56fb073470d\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.595098 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5j67\" (UniqueName: \"kubernetes.io/projected/fc7b1540-d8b4-4387-98ac-a56fb073470d-kube-api-access-v5j67\") pod \"fc7b1540-d8b4-4387-98ac-a56fb073470d\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.595191 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-inventory-0\") pod \"fc7b1540-d8b4-4387-98ac-a56fb073470d\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.595236 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ceph\") pod \"fc7b1540-d8b4-4387-98ac-a56fb073470d\" (UID: \"fc7b1540-d8b4-4387-98ac-a56fb073470d\") " Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.601196 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7b1540-d8b4-4387-98ac-a56fb073470d-kube-api-access-v5j67" (OuterVolumeSpecName: "kube-api-access-v5j67") pod "fc7b1540-d8b4-4387-98ac-a56fb073470d" (UID: "fc7b1540-d8b4-4387-98ac-a56fb073470d"). InnerVolumeSpecName "kube-api-access-v5j67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.601475 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ceph" (OuterVolumeSpecName: "ceph") pod "fc7b1540-d8b4-4387-98ac-a56fb073470d" (UID: "fc7b1540-d8b4-4387-98ac-a56fb073470d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.620036 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "fc7b1540-d8b4-4387-98ac-a56fb073470d" (UID: "fc7b1540-d8b4-4387-98ac-a56fb073470d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.631149 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc7b1540-d8b4-4387-98ac-a56fb073470d" (UID: "fc7b1540-d8b4-4387-98ac-a56fb073470d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.697662 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.697714 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5j67\" (UniqueName: \"kubernetes.io/projected/fc7b1540-d8b4-4387-98ac-a56fb073470d-kube-api-access-v5j67\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.697736 5005 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:38 crc kubenswrapper[5005]: I0225 11:56:38.697757 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc7b1540-d8b4-4387-98ac-a56fb073470d-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.019468 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" event={"ID":"fc7b1540-d8b4-4387-98ac-a56fb073470d","Type":"ContainerDied","Data":"f957545a93dda6af1beeb3af98ef4ade19b0fae48e070260dca27e27fae5e550"} Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.019552 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f957545a93dda6af1beeb3af98ef4ade19b0fae48e070260dca27e27fae5e550" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.020116 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zk2nb" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.134535 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz"] Feb 25 11:56:39 crc kubenswrapper[5005]: E0225 11:56:39.135204 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7b1540-d8b4-4387-98ac-a56fb073470d" containerName="ssh-known-hosts-edpm-deployment" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.135231 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7b1540-d8b4-4387-98ac-a56fb073470d" containerName="ssh-known-hosts-edpm-deployment" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.135538 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7b1540-d8b4-4387-98ac-a56fb073470d" containerName="ssh-known-hosts-edpm-deployment" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.136829 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.141092 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.141699 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.141897 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.148175 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.152309 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz"] Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.156289 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.310208 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.310262 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xq4s\" (UniqueName: \"kubernetes.io/projected/9f92f677-2047-4b97-ad0c-3862a43553d5-kube-api-access-4xq4s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.310342 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.310418 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.412114 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.412180 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.412256 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.412283 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xq4s\" (UniqueName: \"kubernetes.io/projected/9f92f677-2047-4b97-ad0c-3862a43553d5-kube-api-access-4xq4s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.416310 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.416727 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.423173 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 
11:56:39.427989 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xq4s\" (UniqueName: \"kubernetes.io/projected/9f92f677-2047-4b97-ad0c-3862a43553d5-kube-api-access-4xq4s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6pdzz\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.492134 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:39 crc kubenswrapper[5005]: W0225 11:56:39.830477 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f92f677_2047_4b97_ad0c_3862a43553d5.slice/crio-820b4b67d888683fe165cb9daa5ccf424e85190b2eb3bf060d72a0cadec282ac WatchSource:0}: Error finding container 820b4b67d888683fe165cb9daa5ccf424e85190b2eb3bf060d72a0cadec282ac: Status 404 returned error can't find the container with id 820b4b67d888683fe165cb9daa5ccf424e85190b2eb3bf060d72a0cadec282ac Feb 25 11:56:39 crc kubenswrapper[5005]: I0225 11:56:39.847434 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz"] Feb 25 11:56:40 crc kubenswrapper[5005]: I0225 11:56:40.031661 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" event={"ID":"9f92f677-2047-4b97-ad0c-3862a43553d5","Type":"ContainerStarted","Data":"820b4b67d888683fe165cb9daa5ccf424e85190b2eb3bf060d72a0cadec282ac"} Feb 25 11:56:41 crc kubenswrapper[5005]: I0225 11:56:41.043706 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" event={"ID":"9f92f677-2047-4b97-ad0c-3862a43553d5","Type":"ContainerStarted","Data":"cb5c0b958d7ecda07f31de80ebcddf4f9bb9525f58e15e8bb4bf1de98157fb41"} Feb 25 
11:56:41 crc kubenswrapper[5005]: I0225 11:56:41.069242 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" podStartSLOduration=1.63857662 podStartE2EDuration="2.069217816s" podCreationTimestamp="2026-02-25 11:56:39 +0000 UTC" firstStartedPulling="2026-02-25 11:56:39.847828409 +0000 UTC m=+2313.888560736" lastFinishedPulling="2026-02-25 11:56:40.278469605 +0000 UTC m=+2314.319201932" observedRunningTime="2026-02-25 11:56:41.068234086 +0000 UTC m=+2315.108966433" watchObservedRunningTime="2026-02-25 11:56:41.069217816 +0000 UTC m=+2315.109950153" Feb 25 11:56:43 crc kubenswrapper[5005]: I0225 11:56:43.687481 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:56:43 crc kubenswrapper[5005]: E0225 11:56:43.688308 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:56:48 crc kubenswrapper[5005]: I0225 11:56:48.122203 5005 generic.go:334] "Generic (PLEG): container finished" podID="9f92f677-2047-4b97-ad0c-3862a43553d5" containerID="cb5c0b958d7ecda07f31de80ebcddf4f9bb9525f58e15e8bb4bf1de98157fb41" exitCode=0 Feb 25 11:56:48 crc kubenswrapper[5005]: I0225 11:56:48.122326 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" event={"ID":"9f92f677-2047-4b97-ad0c-3862a43553d5","Type":"ContainerDied","Data":"cb5c0b958d7ecda07f31de80ebcddf4f9bb9525f58e15e8bb4bf1de98157fb41"} Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.514432 5005 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.526295 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xq4s\" (UniqueName: \"kubernetes.io/projected/9f92f677-2047-4b97-ad0c-3862a43553d5-kube-api-access-4xq4s\") pod \"9f92f677-2047-4b97-ad0c-3862a43553d5\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.526442 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ceph\") pod \"9f92f677-2047-4b97-ad0c-3862a43553d5\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.526594 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ssh-key-openstack-edpm-ipam\") pod \"9f92f677-2047-4b97-ad0c-3862a43553d5\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.526733 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-inventory\") pod \"9f92f677-2047-4b97-ad0c-3862a43553d5\" (UID: \"9f92f677-2047-4b97-ad0c-3862a43553d5\") " Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.535498 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f92f677-2047-4b97-ad0c-3862a43553d5-kube-api-access-4xq4s" (OuterVolumeSpecName: "kube-api-access-4xq4s") pod "9f92f677-2047-4b97-ad0c-3862a43553d5" (UID: "9f92f677-2047-4b97-ad0c-3862a43553d5"). InnerVolumeSpecName "kube-api-access-4xq4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.536104 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ceph" (OuterVolumeSpecName: "ceph") pod "9f92f677-2047-4b97-ad0c-3862a43553d5" (UID: "9f92f677-2047-4b97-ad0c-3862a43553d5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.564632 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f92f677-2047-4b97-ad0c-3862a43553d5" (UID: "9f92f677-2047-4b97-ad0c-3862a43553d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.568254 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-inventory" (OuterVolumeSpecName: "inventory") pod "9f92f677-2047-4b97-ad0c-3862a43553d5" (UID: "9f92f677-2047-4b97-ad0c-3862a43553d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.628923 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.628958 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xq4s\" (UniqueName: \"kubernetes.io/projected/9f92f677-2047-4b97-ad0c-3862a43553d5-kube-api-access-4xq4s\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.628971 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:49 crc kubenswrapper[5005]: I0225 11:56:49.628980 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f92f677-2047-4b97-ad0c-3862a43553d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.136921 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" event={"ID":"9f92f677-2047-4b97-ad0c-3862a43553d5","Type":"ContainerDied","Data":"820b4b67d888683fe165cb9daa5ccf424e85190b2eb3bf060d72a0cadec282ac"} Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.136976 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6pdzz" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.137021 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820b4b67d888683fe165cb9daa5ccf424e85190b2eb3bf060d72a0cadec282ac" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.215443 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458"] Feb 25 11:56:50 crc kubenswrapper[5005]: E0225 11:56:50.215912 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f92f677-2047-4b97-ad0c-3862a43553d5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.215940 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f92f677-2047-4b97-ad0c-3862a43553d5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.216165 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f92f677-2047-4b97-ad0c-3862a43553d5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.216983 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.219046 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.219904 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.220282 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.220485 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.221227 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.226262 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458"] Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.340473 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.340826 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.340897 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7cjd\" (UniqueName: \"kubernetes.io/projected/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-kube-api-access-n7cjd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.340980 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.442326 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.442452 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.442491 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.442535 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7cjd\" (UniqueName: \"kubernetes.io/projected/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-kube-api-access-n7cjd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.446859 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.446877 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.447966 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 
11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.459543 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7cjd\" (UniqueName: \"kubernetes.io/projected/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-kube-api-access-n7cjd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6z458\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:50 crc kubenswrapper[5005]: I0225 11:56:50.532087 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:56:51 crc kubenswrapper[5005]: I0225 11:56:51.006641 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458"] Feb 25 11:56:51 crc kubenswrapper[5005]: I0225 11:56:51.146124 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" event={"ID":"48c9975f-aaf1-49c2-8dfe-ee8eb7180754","Type":"ContainerStarted","Data":"038345b87aee108f3347c204d407e2b9a102911e94ce03b4989053b1a342ae7c"} Feb 25 11:56:52 crc kubenswrapper[5005]: I0225 11:56:52.157528 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" event={"ID":"48c9975f-aaf1-49c2-8dfe-ee8eb7180754","Type":"ContainerStarted","Data":"95601787b6855870333e753797d3cc0a9df242affa241489215395463bf4a4ce"} Feb 25 11:56:52 crc kubenswrapper[5005]: I0225 11:56:52.200595 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" podStartSLOduration=1.8095969429999998 podStartE2EDuration="2.200575446s" podCreationTimestamp="2026-02-25 11:56:50 +0000 UTC" firstStartedPulling="2026-02-25 11:56:51.009927091 +0000 UTC m=+2325.050659418" lastFinishedPulling="2026-02-25 11:56:51.400905584 +0000 UTC m=+2325.441637921" 
observedRunningTime="2026-02-25 11:56:52.186837026 +0000 UTC m=+2326.227569353" watchObservedRunningTime="2026-02-25 11:56:52.200575446 +0000 UTC m=+2326.241307783" Feb 25 11:56:56 crc kubenswrapper[5005]: I0225 11:56:56.692218 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:56:56 crc kubenswrapper[5005]: E0225 11:56:56.693112 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:57:01 crc kubenswrapper[5005]: I0225 11:57:01.229601 5005 generic.go:334] "Generic (PLEG): container finished" podID="48c9975f-aaf1-49c2-8dfe-ee8eb7180754" containerID="95601787b6855870333e753797d3cc0a9df242affa241489215395463bf4a4ce" exitCode=0 Feb 25 11:57:01 crc kubenswrapper[5005]: I0225 11:57:01.229689 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" event={"ID":"48c9975f-aaf1-49c2-8dfe-ee8eb7180754","Type":"ContainerDied","Data":"95601787b6855870333e753797d3cc0a9df242affa241489215395463bf4a4ce"} Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.633456 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.783242 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-inventory\") pod \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.783463 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ssh-key-openstack-edpm-ipam\") pod \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.783560 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ceph\") pod \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.783728 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7cjd\" (UniqueName: \"kubernetes.io/projected/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-kube-api-access-n7cjd\") pod \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\" (UID: \"48c9975f-aaf1-49c2-8dfe-ee8eb7180754\") " Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.790113 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-kube-api-access-n7cjd" (OuterVolumeSpecName: "kube-api-access-n7cjd") pod "48c9975f-aaf1-49c2-8dfe-ee8eb7180754" (UID: "48c9975f-aaf1-49c2-8dfe-ee8eb7180754"). InnerVolumeSpecName "kube-api-access-n7cjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.790714 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ceph" (OuterVolumeSpecName: "ceph") pod "48c9975f-aaf1-49c2-8dfe-ee8eb7180754" (UID: "48c9975f-aaf1-49c2-8dfe-ee8eb7180754"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.811679 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48c9975f-aaf1-49c2-8dfe-ee8eb7180754" (UID: "48c9975f-aaf1-49c2-8dfe-ee8eb7180754"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.812137 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-inventory" (OuterVolumeSpecName: "inventory") pod "48c9975f-aaf1-49c2-8dfe-ee8eb7180754" (UID: "48c9975f-aaf1-49c2-8dfe-ee8eb7180754"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.885305 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7cjd\" (UniqueName: \"kubernetes.io/projected/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-kube-api-access-n7cjd\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.885344 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.885354 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:02 crc kubenswrapper[5005]: I0225 11:57:02.885364 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48c9975f-aaf1-49c2-8dfe-ee8eb7180754-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.250341 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" event={"ID":"48c9975f-aaf1-49c2-8dfe-ee8eb7180754","Type":"ContainerDied","Data":"038345b87aee108f3347c204d407e2b9a102911e94ce03b4989053b1a342ae7c"} Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.250605 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6z458" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.250614 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="038345b87aee108f3347c204d407e2b9a102911e94ce03b4989053b1a342ae7c" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.331960 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5"] Feb 25 11:57:03 crc kubenswrapper[5005]: E0225 11:57:03.332340 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c9975f-aaf1-49c2-8dfe-ee8eb7180754" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.332366 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c9975f-aaf1-49c2-8dfe-ee8eb7180754" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.332602 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c9975f-aaf1-49c2-8dfe-ee8eb7180754" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.333219 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.335801 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336145 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336186 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336236 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336191 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336391 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336445 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.336468 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.348332 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5"] Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.496856 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.496928 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.496960 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497020 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497058 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497128 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497161 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497184 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbr9\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-kube-api-access-rtbr9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497214 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497244 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497269 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497327 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.497405 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.598941 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599006 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599028 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599068 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: 
\"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599105 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599182 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599216 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599238 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtbr9\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-kube-api-access-rtbr9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599270 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599293 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599319 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599345 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.599393 5005 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.604720 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.604904 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.605670 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.606073 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: 
\"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.606311 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.606537 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.606672 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.607917 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.608005 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.608135 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.608483 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.609195 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.621032 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtbr9\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-kube-api-access-rtbr9\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:03 crc kubenswrapper[5005]: I0225 11:57:03.652943 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:04 crc kubenswrapper[5005]: I0225 11:57:04.182512 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5"] Feb 25 11:57:04 crc kubenswrapper[5005]: I0225 11:57:04.263501 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" event={"ID":"2f97d5cb-2060-4a95-84dd-e4dca87fb87a","Type":"ContainerStarted","Data":"c1fe603aaa64334a1520d9347d20ffcb0e5e57022ea83c56654996b687f679a7"} Feb 25 11:57:05 crc kubenswrapper[5005]: I0225 11:57:05.274221 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" event={"ID":"2f97d5cb-2060-4a95-84dd-e4dca87fb87a","Type":"ContainerStarted","Data":"fcc39c61c655db7d490e09a98ae20db09fec27d2de98ff2980e0b8180b8a2002"} Feb 25 11:57:05 crc kubenswrapper[5005]: I0225 11:57:05.294184 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" podStartSLOduration=1.878444135 podStartE2EDuration="2.294160374s" podCreationTimestamp="2026-02-25 11:57:03 +0000 UTC" firstStartedPulling="2026-02-25 11:57:04.191107162 +0000 UTC m=+2338.231839489" lastFinishedPulling="2026-02-25 11:57:04.606823391 +0000 UTC m=+2338.647555728" observedRunningTime="2026-02-25 
11:57:05.290985126 +0000 UTC m=+2339.331717453" watchObservedRunningTime="2026-02-25 11:57:05.294160374 +0000 UTC m=+2339.334892701" Feb 25 11:57:11 crc kubenswrapper[5005]: I0225 11:57:11.685970 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:57:11 crc kubenswrapper[5005]: E0225 11:57:11.686803 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:57:24 crc kubenswrapper[5005]: I0225 11:57:24.685920 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:57:24 crc kubenswrapper[5005]: E0225 11:57:24.687631 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:57:35 crc kubenswrapper[5005]: I0225 11:57:35.545414 5005 generic.go:334] "Generic (PLEG): container finished" podID="2f97d5cb-2060-4a95-84dd-e4dca87fb87a" containerID="fcc39c61c655db7d490e09a98ae20db09fec27d2de98ff2980e0b8180b8a2002" exitCode=0 Feb 25 11:57:35 crc kubenswrapper[5005]: I0225 11:57:35.545506 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" 
event={"ID":"2f97d5cb-2060-4a95-84dd-e4dca87fb87a","Type":"ContainerDied","Data":"fcc39c61c655db7d490e09a98ae20db09fec27d2de98ff2980e0b8180b8a2002"} Feb 25 11:57:36 crc kubenswrapper[5005]: I0225 11:57:36.988553 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.168030 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.168084 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ssh-key-openstack-edpm-ipam\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.168137 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtbr9\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-kube-api-access-rtbr9\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.168161 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 
11:57:37.168200 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-nova-combined-ca-bundle\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169140 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ceph\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169208 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-libvirt-combined-ca-bundle\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169287 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-neutron-metadata-combined-ca-bundle\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169346 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-bootstrap-combined-ca-bundle\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169485 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169548 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ovn-combined-ca-bundle\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169687 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-repo-setup-combined-ca-bundle\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.169728 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-inventory\") pod \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\" (UID: \"2f97d5cb-2060-4a95-84dd-e4dca87fb87a\") " Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.174345 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.174653 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.174695 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ceph" (OuterVolumeSpecName: "ceph") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.175181 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.175282 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-kube-api-access-rtbr9" (OuterVolumeSpecName: "kube-api-access-rtbr9") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "kube-api-access-rtbr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.175833 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.177312 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181110 5005 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181197 5005 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181219 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181265 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtbr9\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-kube-api-access-rtbr9\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181289 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181306 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.181323 5005 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.182205 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.192202 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.192261 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.192591 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.206122 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-inventory" (OuterVolumeSpecName: "inventory") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.208305 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f97d5cb-2060-4a95-84dd-e4dca87fb87a" (UID: "2f97d5cb-2060-4a95-84dd-e4dca87fb87a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.283025 5005 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.283085 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.283107 5005 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.283125 5005 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.283141 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.283155 5005 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f97d5cb-2060-4a95-84dd-e4dca87fb87a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.561991 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" event={"ID":"2f97d5cb-2060-4a95-84dd-e4dca87fb87a","Type":"ContainerDied","Data":"c1fe603aaa64334a1520d9347d20ffcb0e5e57022ea83c56654996b687f679a7"} Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.562314 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fe603aaa64334a1520d9347d20ffcb0e5e57022ea83c56654996b687f679a7" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.562044 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.679039 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj"] Feb 25 11:57:37 crc kubenswrapper[5005]: E0225 11:57:37.679595 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f97d5cb-2060-4a95-84dd-e4dca87fb87a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.679621 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f97d5cb-2060-4a95-84dd-e4dca87fb87a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.679841 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f97d5cb-2060-4a95-84dd-e4dca87fb87a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.680652 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.685426 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:57:37 crc kubenswrapper[5005]: E0225 11:57:37.685919 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.686486 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.687509 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.687955 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.688004 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.688140 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.691552 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj"] Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.791473 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nqm5g\" (UniqueName: \"kubernetes.io/projected/254d9140-397a-4f78-b3bc-ad03ddd2fb26-kube-api-access-nqm5g\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.791559 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.791660 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.791692 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.893206 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ssh-key-openstack-edpm-ipam\") pod 
\"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.893289 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.893320 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.893423 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqm5g\" (UniqueName: \"kubernetes.io/projected/254d9140-397a-4f78-b3bc-ad03ddd2fb26-kube-api-access-nqm5g\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.897603 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.898548 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.901104 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.916589 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqm5g\" (UniqueName: \"kubernetes.io/projected/254d9140-397a-4f78-b3bc-ad03ddd2fb26-kube-api-access-nqm5g\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:37 crc kubenswrapper[5005]: I0225 11:57:37.995866 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:38 crc kubenswrapper[5005]: I0225 11:57:38.576179 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj"] Feb 25 11:57:39 crc kubenswrapper[5005]: I0225 11:57:39.579303 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" event={"ID":"254d9140-397a-4f78-b3bc-ad03ddd2fb26","Type":"ContainerStarted","Data":"6a145b3fde5d52fec1c3904b6c94302bdfad2de8ad70c57d05dfb303464042ed"} Feb 25 11:57:39 crc kubenswrapper[5005]: I0225 11:57:39.579625 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" event={"ID":"254d9140-397a-4f78-b3bc-ad03ddd2fb26","Type":"ContainerStarted","Data":"d6a5123ebbc4d41fe748df79f8b17d2bad2b037dab264867359c33ddbe835854"} Feb 25 11:57:39 crc kubenswrapper[5005]: I0225 11:57:39.599402 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" podStartSLOduration=2.15909355 podStartE2EDuration="2.599348876s" podCreationTimestamp="2026-02-25 11:57:37 +0000 UTC" firstStartedPulling="2026-02-25 11:57:38.591525567 +0000 UTC m=+2372.632257894" lastFinishedPulling="2026-02-25 11:57:39.031780893 +0000 UTC m=+2373.072513220" observedRunningTime="2026-02-25 11:57:39.593646418 +0000 UTC m=+2373.634378745" watchObservedRunningTime="2026-02-25 11:57:39.599348876 +0000 UTC m=+2373.640081213" Feb 25 11:57:44 crc kubenswrapper[5005]: I0225 11:57:44.616774 5005 generic.go:334] "Generic (PLEG): container finished" podID="254d9140-397a-4f78-b3bc-ad03ddd2fb26" containerID="6a145b3fde5d52fec1c3904b6c94302bdfad2de8ad70c57d05dfb303464042ed" exitCode=0 Feb 25 11:57:44 crc kubenswrapper[5005]: I0225 11:57:44.616901 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" event={"ID":"254d9140-397a-4f78-b3bc-ad03ddd2fb26","Type":"ContainerDied","Data":"6a145b3fde5d52fec1c3904b6c94302bdfad2de8ad70c57d05dfb303464042ed"} Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.052531 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.148323 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-inventory\") pod \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.148411 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ssh-key-openstack-edpm-ipam\") pod \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.148628 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqm5g\" (UniqueName: \"kubernetes.io/projected/254d9140-397a-4f78-b3bc-ad03ddd2fb26-kube-api-access-nqm5g\") pod \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.148698 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ceph\") pod \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\" (UID: \"254d9140-397a-4f78-b3bc-ad03ddd2fb26\") " Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.165921 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/254d9140-397a-4f78-b3bc-ad03ddd2fb26-kube-api-access-nqm5g" (OuterVolumeSpecName: "kube-api-access-nqm5g") pod "254d9140-397a-4f78-b3bc-ad03ddd2fb26" (UID: "254d9140-397a-4f78-b3bc-ad03ddd2fb26"). InnerVolumeSpecName "kube-api-access-nqm5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.166445 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ceph" (OuterVolumeSpecName: "ceph") pod "254d9140-397a-4f78-b3bc-ad03ddd2fb26" (UID: "254d9140-397a-4f78-b3bc-ad03ddd2fb26"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.174515 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-inventory" (OuterVolumeSpecName: "inventory") pod "254d9140-397a-4f78-b3bc-ad03ddd2fb26" (UID: "254d9140-397a-4f78-b3bc-ad03ddd2fb26"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.186014 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "254d9140-397a-4f78-b3bc-ad03ddd2fb26" (UID: "254d9140-397a-4f78-b3bc-ad03ddd2fb26"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.251540 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.252005 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.252154 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqm5g\" (UniqueName: \"kubernetes.io/projected/254d9140-397a-4f78-b3bc-ad03ddd2fb26-kube-api-access-nqm5g\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.252333 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/254d9140-397a-4f78-b3bc-ad03ddd2fb26-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.636007 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" event={"ID":"254d9140-397a-4f78-b3bc-ad03ddd2fb26","Type":"ContainerDied","Data":"d6a5123ebbc4d41fe748df79f8b17d2bad2b037dab264867359c33ddbe835854"} Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.636288 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6a5123ebbc4d41fe748df79f8b17d2bad2b037dab264867359c33ddbe835854" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.636082 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.733680 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj"] Feb 25 11:57:46 crc kubenswrapper[5005]: E0225 11:57:46.735064 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254d9140-397a-4f78-b3bc-ad03ddd2fb26" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.735098 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="254d9140-397a-4f78-b3bc-ad03ddd2fb26" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.736558 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="254d9140-397a-4f78-b3bc-ad03ddd2fb26" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.738114 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.741903 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.742065 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.742860 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.743048 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.744415 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.744656 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.761715 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.762035 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.762103 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.763649 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbf5\" (UniqueName: \"kubernetes.io/projected/5d029a87-5e79-4abe-9bc5-68de638fb6b8-kube-api-access-lbbf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.763953 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.764122 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.779039 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj"] Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.866154 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.866263 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.866289 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.866315 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbbf5\" (UniqueName: \"kubernetes.io/projected/5d029a87-5e79-4abe-9bc5-68de638fb6b8-kube-api-access-lbbf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.866419 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.866470 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.867842 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.873916 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.874364 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.875319 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.879711 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:46 crc kubenswrapper[5005]: I0225 11:57:46.887198 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbbf5\" (UniqueName: \"kubernetes.io/projected/5d029a87-5e79-4abe-9bc5-68de638fb6b8-kube-api-access-lbbf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-w6dhj\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:47 crc kubenswrapper[5005]: I0225 11:57:47.072678 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:57:47 crc kubenswrapper[5005]: I0225 11:57:47.592425 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj"] Feb 25 11:57:47 crc kubenswrapper[5005]: I0225 11:57:47.648127 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" event={"ID":"5d029a87-5e79-4abe-9bc5-68de638fb6b8","Type":"ContainerStarted","Data":"197c6296d24aa9270b06a8c261f27c67bea1db67bab2ed79d1897595eff00557"} Feb 25 11:57:48 crc kubenswrapper[5005]: I0225 11:57:48.657326 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" event={"ID":"5d029a87-5e79-4abe-9bc5-68de638fb6b8","Type":"ContainerStarted","Data":"d6578f967f4e91b1ffc01d62e9ff73857658b4428117f3909b5142c9484f4898"} Feb 25 11:57:48 crc kubenswrapper[5005]: I0225 11:57:48.685452 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" podStartSLOduration=2.26880538 podStartE2EDuration="2.685432746s" podCreationTimestamp="2026-02-25 11:57:46 +0000 UTC" firstStartedPulling="2026-02-25 11:57:47.599628224 +0000 UTC m=+2381.640360551" lastFinishedPulling="2026-02-25 11:57:48.01625558 +0000 UTC m=+2382.056987917" observedRunningTime="2026-02-25 11:57:48.676252408 +0000 UTC m=+2382.716984735" watchObservedRunningTime="2026-02-25 11:57:48.685432746 +0000 UTC m=+2382.726165073" Feb 25 11:57:49 crc kubenswrapper[5005]: I0225 11:57:49.685935 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:57:49 crc kubenswrapper[5005]: E0225 11:57:49.686329 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.136229 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533678-r67xh"] Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.138083 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.140868 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.140931 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.141023 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.148610 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533678-r67xh"] Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.223428 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6z5\" (UniqueName: \"kubernetes.io/projected/330ac3af-0f49-422f-8bbf-81585552eec0-kube-api-access-6k6z5\") pod \"auto-csr-approver-29533678-r67xh\" (UID: \"330ac3af-0f49-422f-8bbf-81585552eec0\") " pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.325890 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6z5\" (UniqueName: 
\"kubernetes.io/projected/330ac3af-0f49-422f-8bbf-81585552eec0-kube-api-access-6k6z5\") pod \"auto-csr-approver-29533678-r67xh\" (UID: \"330ac3af-0f49-422f-8bbf-81585552eec0\") " pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.352155 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6z5\" (UniqueName: \"kubernetes.io/projected/330ac3af-0f49-422f-8bbf-81585552eec0-kube-api-access-6k6z5\") pod \"auto-csr-approver-29533678-r67xh\" (UID: \"330ac3af-0f49-422f-8bbf-81585552eec0\") " pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.463528 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:00 crc kubenswrapper[5005]: I0225 11:58:00.900514 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533678-r67xh"] Feb 25 11:58:00 crc kubenswrapper[5005]: W0225 11:58:00.904656 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod330ac3af_0f49_422f_8bbf_81585552eec0.slice/crio-11c6ca1a2936d253dde427c77d8d08058dc043645f68c7f12b15c6b463f0e597 WatchSource:0}: Error finding container 11c6ca1a2936d253dde427c77d8d08058dc043645f68c7f12b15c6b463f0e597: Status 404 returned error can't find the container with id 11c6ca1a2936d253dde427c77d8d08058dc043645f68c7f12b15c6b463f0e597 Feb 25 11:58:01 crc kubenswrapper[5005]: I0225 11:58:01.765755 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533678-r67xh" event={"ID":"330ac3af-0f49-422f-8bbf-81585552eec0","Type":"ContainerStarted","Data":"11c6ca1a2936d253dde427c77d8d08058dc043645f68c7f12b15c6b463f0e597"} Feb 25 11:58:02 crc kubenswrapper[5005]: I0225 11:58:02.686413 5005 scope.go:117] "RemoveContainer" 
containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:58:02 crc kubenswrapper[5005]: E0225 11:58:02.686667 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:58:04 crc kubenswrapper[5005]: I0225 11:58:04.793121 5005 generic.go:334] "Generic (PLEG): container finished" podID="330ac3af-0f49-422f-8bbf-81585552eec0" containerID="083bef0023c1dca3b89c5f55eaa66c7dd9572acab6cdd710e0f88fa67566c5b6" exitCode=0 Feb 25 11:58:04 crc kubenswrapper[5005]: I0225 11:58:04.793516 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533678-r67xh" event={"ID":"330ac3af-0f49-422f-8bbf-81585552eec0","Type":"ContainerDied","Data":"083bef0023c1dca3b89c5f55eaa66c7dd9572acab6cdd710e0f88fa67566c5b6"} Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.240512 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.333927 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k6z5\" (UniqueName: \"kubernetes.io/projected/330ac3af-0f49-422f-8bbf-81585552eec0-kube-api-access-6k6z5\") pod \"330ac3af-0f49-422f-8bbf-81585552eec0\" (UID: \"330ac3af-0f49-422f-8bbf-81585552eec0\") " Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.339169 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330ac3af-0f49-422f-8bbf-81585552eec0-kube-api-access-6k6z5" (OuterVolumeSpecName: "kube-api-access-6k6z5") pod "330ac3af-0f49-422f-8bbf-81585552eec0" (UID: "330ac3af-0f49-422f-8bbf-81585552eec0"). InnerVolumeSpecName "kube-api-access-6k6z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.436470 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k6z5\" (UniqueName: \"kubernetes.io/projected/330ac3af-0f49-422f-8bbf-81585552eec0-kube-api-access-6k6z5\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.814442 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533678-r67xh" event={"ID":"330ac3af-0f49-422f-8bbf-81585552eec0","Type":"ContainerDied","Data":"11c6ca1a2936d253dde427c77d8d08058dc043645f68c7f12b15c6b463f0e597"} Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.814477 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533678-r67xh" Feb 25 11:58:06 crc kubenswrapper[5005]: I0225 11:58:06.814493 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c6ca1a2936d253dde427c77d8d08058dc043645f68c7f12b15c6b463f0e597" Feb 25 11:58:07 crc kubenswrapper[5005]: I0225 11:58:07.343468 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533672-2vvsz"] Feb 25 11:58:07 crc kubenswrapper[5005]: I0225 11:58:07.350771 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533672-2vvsz"] Feb 25 11:58:08 crc kubenswrapper[5005]: I0225 11:58:08.699249 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4" path="/var/lib/kubelet/pods/2ada5fa1-990b-4bb8-a32f-1f6b7fce77b4/volumes" Feb 25 11:58:15 crc kubenswrapper[5005]: I0225 11:58:15.685897 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:58:15 crc kubenswrapper[5005]: E0225 11:58:15.686992 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:58:19 crc kubenswrapper[5005]: I0225 11:58:19.043844 5005 scope.go:117] "RemoveContainer" containerID="d2dd31e152451aad04e3824484f36e5a84d17183fc236cbb9d8748f77e440fd6" Feb 25 11:58:27 crc kubenswrapper[5005]: I0225 11:58:27.686305 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:58:27 crc kubenswrapper[5005]: E0225 11:58:27.687617 5005 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:58:39 crc kubenswrapper[5005]: I0225 11:58:39.685521 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:58:39 crc kubenswrapper[5005]: E0225 11:58:39.686324 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:58:51 crc kubenswrapper[5005]: I0225 11:58:51.685586 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:58:51 crc kubenswrapper[5005]: E0225 11:58:51.686778 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:58:55 crc kubenswrapper[5005]: I0225 11:58:55.871948 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bv876"] Feb 25 11:58:55 crc kubenswrapper[5005]: E0225 
11:58:55.873138 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330ac3af-0f49-422f-8bbf-81585552eec0" containerName="oc" Feb 25 11:58:55 crc kubenswrapper[5005]: I0225 11:58:55.873155 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="330ac3af-0f49-422f-8bbf-81585552eec0" containerName="oc" Feb 25 11:58:55 crc kubenswrapper[5005]: I0225 11:58:55.873403 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="330ac3af-0f49-422f-8bbf-81585552eec0" containerName="oc" Feb 25 11:58:55 crc kubenswrapper[5005]: I0225 11:58:55.874753 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:55 crc kubenswrapper[5005]: I0225 11:58:55.924903 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv876"] Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.028294 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-catalog-content\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.028356 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8v6f\" (UniqueName: \"kubernetes.io/projected/01894e34-7fbd-4991-9f41-edfcf327d906-kube-api-access-b8v6f\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.028401 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-utilities\") pod 
\"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.130652 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-catalog-content\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.130735 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8v6f\" (UniqueName: \"kubernetes.io/projected/01894e34-7fbd-4991-9f41-edfcf327d906-kube-api-access-b8v6f\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.130771 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-utilities\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.131590 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-catalog-content\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.131651 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-utilities\") pod \"redhat-marketplace-bv876\" (UID: 
\"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.153843 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8v6f\" (UniqueName: \"kubernetes.io/projected/01894e34-7fbd-4991-9f41-edfcf327d906-kube-api-access-b8v6f\") pod \"redhat-marketplace-bv876\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.198022 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:58:56 crc kubenswrapper[5005]: I0225 11:58:56.651428 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv876"] Feb 25 11:58:57 crc kubenswrapper[5005]: I0225 11:58:57.637853 5005 generic.go:334] "Generic (PLEG): container finished" podID="01894e34-7fbd-4991-9f41-edfcf327d906" containerID="96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3" exitCode=0 Feb 25 11:58:57 crc kubenswrapper[5005]: I0225 11:58:57.637951 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv876" event={"ID":"01894e34-7fbd-4991-9f41-edfcf327d906","Type":"ContainerDied","Data":"96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3"} Feb 25 11:58:57 crc kubenswrapper[5005]: I0225 11:58:57.638286 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv876" event={"ID":"01894e34-7fbd-4991-9f41-edfcf327d906","Type":"ContainerStarted","Data":"a041f123a3f9b29820ff8c3ec88dc7176b537d163282be248d2533449a3b330c"} Feb 25 11:58:57 crc kubenswrapper[5005]: I0225 11:58:57.640148 5005 generic.go:334] "Generic (PLEG): container finished" podID="5d029a87-5e79-4abe-9bc5-68de638fb6b8" 
containerID="d6578f967f4e91b1ffc01d62e9ff73857658b4428117f3909b5142c9484f4898" exitCode=0 Feb 25 11:58:57 crc kubenswrapper[5005]: I0225 11:58:57.640172 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" event={"ID":"5d029a87-5e79-4abe-9bc5-68de638fb6b8","Type":"ContainerDied","Data":"d6578f967f4e91b1ffc01d62e9ff73857658b4428117f3909b5142c9484f4898"} Feb 25 11:58:58 crc kubenswrapper[5005]: I0225 11:58:58.649514 5005 generic.go:334] "Generic (PLEG): container finished" podID="01894e34-7fbd-4991-9f41-edfcf327d906" containerID="a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e" exitCode=0 Feb 25 11:58:58 crc kubenswrapper[5005]: I0225 11:58:58.649588 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv876" event={"ID":"01894e34-7fbd-4991-9f41-edfcf327d906","Type":"ContainerDied","Data":"a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e"} Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.087062 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.093060 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbbf5\" (UniqueName: \"kubernetes.io/projected/5d029a87-5e79-4abe-9bc5-68de638fb6b8-kube-api-access-lbbf5\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.093240 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovncontroller-config-0\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.093720 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.093904 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ssh-key-openstack-edpm-ipam\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.094096 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ceph\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.094261 5005 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovn-combined-ca-bundle\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.099459 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.099567 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d029a87-5e79-4abe-9bc5-68de638fb6b8-kube-api-access-lbbf5" (OuterVolumeSpecName: "kube-api-access-lbbf5") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8"). InnerVolumeSpecName "kube-api-access-lbbf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.110575 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ceph" (OuterVolumeSpecName: "ceph") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:58:59 crc kubenswrapper[5005]: E0225 11:58:59.127397 5005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory podName:5d029a87-5e79-4abe-9bc5-68de638fb6b8 nodeName:}" failed. No retries permitted until 2026-02-25 11:58:59.627339667 +0000 UTC m=+2453.668072014 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8") : error deleting /var/lib/kubelet/pods/5d029a87-5e79-4abe-9bc5-68de638fb6b8/volume-subpaths: remove /var/lib/kubelet/pods/5d029a87-5e79-4abe-9bc5-68de638fb6b8/volume-subpaths: no such file or directory Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.128472 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.132131 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.197242 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.197285 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.197299 5005 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.197310 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbbf5\" (UniqueName: \"kubernetes.io/projected/5d029a87-5e79-4abe-9bc5-68de638fb6b8-kube-api-access-lbbf5\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.197322 5005 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5d029a87-5e79-4abe-9bc5-68de638fb6b8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.658133 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" event={"ID":"5d029a87-5e79-4abe-9bc5-68de638fb6b8","Type":"ContainerDied","Data":"197c6296d24aa9270b06a8c261f27c67bea1db67bab2ed79d1897595eff00557"} Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.658529 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197c6296d24aa9270b06a8c261f27c67bea1db67bab2ed79d1897595eff00557" Feb 25 11:58:59 
crc kubenswrapper[5005]: I0225 11:58:59.658437 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-w6dhj" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.660320 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv876" event={"ID":"01894e34-7fbd-4991-9f41-edfcf327d906","Type":"ContainerStarted","Data":"44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6"} Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.682051 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bv876" podStartSLOduration=3.272380004 podStartE2EDuration="4.682034178s" podCreationTimestamp="2026-02-25 11:58:55 +0000 UTC" firstStartedPulling="2026-02-25 11:58:57.640869198 +0000 UTC m=+2451.681601525" lastFinishedPulling="2026-02-25 11:58:59.050523362 +0000 UTC m=+2453.091255699" observedRunningTime="2026-02-25 11:58:59.678832967 +0000 UTC m=+2453.719565304" watchObservedRunningTime="2026-02-25 11:58:59.682034178 +0000 UTC m=+2453.722766495" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.705148 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory\") pod \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\" (UID: \"5d029a87-5e79-4abe-9bc5-68de638fb6b8\") " Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.714828 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory" (OuterVolumeSpecName: "inventory") pod "5d029a87-5e79-4abe-9bc5-68de638fb6b8" (UID: "5d029a87-5e79-4abe-9bc5-68de638fb6b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.807402 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d029a87-5e79-4abe-9bc5-68de638fb6b8-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.815161 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x"] Feb 25 11:58:59 crc kubenswrapper[5005]: E0225 11:58:59.815569 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d029a87-5e79-4abe-9bc5-68de638fb6b8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.815593 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d029a87-5e79-4abe-9bc5-68de638fb6b8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.815772 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d029a87-5e79-4abe-9bc5-68de638fb6b8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.816425 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.818270 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.820158 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.839318 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x"] Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.908952 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.909068 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9vf\" (UniqueName: \"kubernetes.io/projected/44b0558f-354e-40f9-9db7-8f5e8762f1fe-kube-api-access-4j9vf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.909111 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.909153 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.909172 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.909215 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:58:59 crc kubenswrapper[5005]: I0225 11:58:59.909290 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: 
\"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.010867 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.010943 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j9vf\" (UniqueName: \"kubernetes.io/projected/44b0558f-354e-40f9-9db7-8f5e8762f1fe-kube-api-access-4j9vf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.010973 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.011007 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.011026 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.011052 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.011106 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.015746 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.016015 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.016332 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.016389 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.017609 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.026614 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.030490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j9vf\" (UniqueName: \"kubernetes.io/projected/44b0558f-354e-40f9-9db7-8f5e8762f1fe-kube-api-access-4j9vf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.132770 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.642905 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x"] Feb 25 11:59:00 crc kubenswrapper[5005]: W0225 11:59:00.650315 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44b0558f_354e_40f9_9db7_8f5e8762f1fe.slice/crio-49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f WatchSource:0}: Error finding container 49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f: Status 404 returned error can't find the container with id 49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f Feb 25 11:59:00 crc kubenswrapper[5005]: I0225 11:59:00.668037 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" event={"ID":"44b0558f-354e-40f9-9db7-8f5e8762f1fe","Type":"ContainerStarted","Data":"49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f"} Feb 25 11:59:01 crc kubenswrapper[5005]: I0225 11:59:01.678421 
5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" event={"ID":"44b0558f-354e-40f9-9db7-8f5e8762f1fe","Type":"ContainerStarted","Data":"a94399a7509d81322d2da34ae1788d0be05978e91d1a6e73a3208656eb9001c4"} Feb 25 11:59:01 crc kubenswrapper[5005]: I0225 11:59:01.713557 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" podStartSLOduration=2.249647269 podStartE2EDuration="2.713526995s" podCreationTimestamp="2026-02-25 11:58:59 +0000 UTC" firstStartedPulling="2026-02-25 11:59:00.651696673 +0000 UTC m=+2454.692429000" lastFinishedPulling="2026-02-25 11:59:01.115576399 +0000 UTC m=+2455.156308726" observedRunningTime="2026-02-25 11:59:01.707207307 +0000 UTC m=+2455.747939634" watchObservedRunningTime="2026-02-25 11:59:01.713526995 +0000 UTC m=+2455.754259322" Feb 25 11:59:05 crc kubenswrapper[5005]: I0225 11:59:05.686303 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:59:05 crc kubenswrapper[5005]: E0225 11:59:05.687421 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:59:06 crc kubenswrapper[5005]: I0225 11:59:06.198199 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:59:06 crc kubenswrapper[5005]: I0225 11:59:06.198274 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 
11:59:06 crc kubenswrapper[5005]: I0225 11:59:06.244170 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:59:06 crc kubenswrapper[5005]: I0225 11:59:06.793245 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:59:08 crc kubenswrapper[5005]: I0225 11:59:08.072063 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv876"] Feb 25 11:59:08 crc kubenswrapper[5005]: I0225 11:59:08.744736 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bv876" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="registry-server" containerID="cri-o://44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6" gracePeriod=2 Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.171938 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.370564 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-catalog-content\") pod \"01894e34-7fbd-4991-9f41-edfcf327d906\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.370767 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-utilities\") pod \"01894e34-7fbd-4991-9f41-edfcf327d906\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.370910 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8v6f\" (UniqueName: \"kubernetes.io/projected/01894e34-7fbd-4991-9f41-edfcf327d906-kube-api-access-b8v6f\") pod \"01894e34-7fbd-4991-9f41-edfcf327d906\" (UID: \"01894e34-7fbd-4991-9f41-edfcf327d906\") " Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.371657 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-utilities" (OuterVolumeSpecName: "utilities") pod "01894e34-7fbd-4991-9f41-edfcf327d906" (UID: "01894e34-7fbd-4991-9f41-edfcf327d906"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.376482 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01894e34-7fbd-4991-9f41-edfcf327d906-kube-api-access-b8v6f" (OuterVolumeSpecName: "kube-api-access-b8v6f") pod "01894e34-7fbd-4991-9f41-edfcf327d906" (UID: "01894e34-7fbd-4991-9f41-edfcf327d906"). InnerVolumeSpecName "kube-api-access-b8v6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.393295 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01894e34-7fbd-4991-9f41-edfcf327d906" (UID: "01894e34-7fbd-4991-9f41-edfcf327d906"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.473449 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8v6f\" (UniqueName: \"kubernetes.io/projected/01894e34-7fbd-4991-9f41-edfcf327d906-kube-api-access-b8v6f\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.473502 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.473512 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01894e34-7fbd-4991-9f41-edfcf327d906-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.760624 5005 generic.go:334] "Generic (PLEG): container finished" podID="01894e34-7fbd-4991-9f41-edfcf327d906" containerID="44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6" exitCode=0 Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.760715 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv876" event={"ID":"01894e34-7fbd-4991-9f41-edfcf327d906","Type":"ContainerDied","Data":"44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6"} Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.761157 5005 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-bv876" event={"ID":"01894e34-7fbd-4991-9f41-edfcf327d906","Type":"ContainerDied","Data":"a041f123a3f9b29820ff8c3ec88dc7176b537d163282be248d2533449a3b330c"} Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.760785 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv876" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.761240 5005 scope.go:117] "RemoveContainer" containerID="44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.791547 5005 scope.go:117] "RemoveContainer" containerID="a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.803786 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv876"] Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.810491 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv876"] Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.821256 5005 scope.go:117] "RemoveContainer" containerID="96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.875678 5005 scope.go:117] "RemoveContainer" containerID="44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6" Feb 25 11:59:09 crc kubenswrapper[5005]: E0225 11:59:09.876290 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6\": container with ID starting with 44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6 not found: ID does not exist" containerID="44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.876336 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6"} err="failed to get container status \"44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6\": rpc error: code = NotFound desc = could not find container \"44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6\": container with ID starting with 44ea1a58558ef4c5fe170ce0742bbf0e1df9c5508a9e1110eeaf24f8aa359cb6 not found: ID does not exist" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.876364 5005 scope.go:117] "RemoveContainer" containerID="a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e" Feb 25 11:59:09 crc kubenswrapper[5005]: E0225 11:59:09.876767 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e\": container with ID starting with a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e not found: ID does not exist" containerID="a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.876825 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e"} err="failed to get container status \"a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e\": rpc error: code = NotFound desc = could not find container \"a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e\": container with ID starting with a2c85da8d8cd96440ea015632343daed2d3e303ea05b18a083148a283c75ef4e not found: ID does not exist" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.876863 5005 scope.go:117] "RemoveContainer" containerID="96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3" Feb 25 11:59:09 crc kubenswrapper[5005]: E0225 
11:59:09.877239 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3\": container with ID starting with 96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3 not found: ID does not exist" containerID="96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3" Feb 25 11:59:09 crc kubenswrapper[5005]: I0225 11:59:09.877262 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3"} err="failed to get container status \"96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3\": rpc error: code = NotFound desc = could not find container \"96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3\": container with ID starting with 96ece352124cada0f2d0de926c91d8e7590f0140b1410ea365cb8676e05d4ea3 not found: ID does not exist" Feb 25 11:59:10 crc kubenswrapper[5005]: I0225 11:59:10.703843 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" path="/var/lib/kubelet/pods/01894e34-7fbd-4991-9f41-edfcf327d906/volumes" Feb 25 11:59:18 crc kubenswrapper[5005]: I0225 11:59:18.685735 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:59:18 crc kubenswrapper[5005]: E0225 11:59:18.686544 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:59:31 crc kubenswrapper[5005]: I0225 11:59:31.685651 
5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:59:31 crc kubenswrapper[5005]: E0225 11:59:31.686709 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:59:43 crc kubenswrapper[5005]: I0225 11:59:43.685540 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:59:43 crc kubenswrapper[5005]: E0225 11:59:43.688614 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:59:54 crc kubenswrapper[5005]: I0225 11:59:54.685140 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 11:59:54 crc kubenswrapper[5005]: E0225 11:59:54.686096 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 11:59:57 crc kubenswrapper[5005]: I0225 
11:59:57.191641 5005 generic.go:334] "Generic (PLEG): container finished" podID="44b0558f-354e-40f9-9db7-8f5e8762f1fe" containerID="a94399a7509d81322d2da34ae1788d0be05978e91d1a6e73a3208656eb9001c4" exitCode=0 Feb 25 11:59:57 crc kubenswrapper[5005]: I0225 11:59:57.191774 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" event={"ID":"44b0558f-354e-40f9-9db7-8f5e8762f1fe","Type":"ContainerDied","Data":"a94399a7509d81322d2da34ae1788d0be05978e91d1a6e73a3208656eb9001c4"} Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.743361 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830382 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-inventory\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830425 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830455 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ceph\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830505 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4j9vf\" (UniqueName: \"kubernetes.io/projected/44b0558f-354e-40f9-9db7-8f5e8762f1fe-kube-api-access-4j9vf\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830550 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-metadata-combined-ca-bundle\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830584 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-nova-metadata-neutron-config-0\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.830620 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ssh-key-openstack-edpm-ipam\") pod \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\" (UID: \"44b0558f-354e-40f9-9db7-8f5e8762f1fe\") " Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.836476 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.838764 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b0558f-354e-40f9-9db7-8f5e8762f1fe-kube-api-access-4j9vf" (OuterVolumeSpecName: "kube-api-access-4j9vf") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "kube-api-access-4j9vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.844500 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ceph" (OuterVolumeSpecName: "ceph") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.855578 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.855994 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.861008 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-inventory" (OuterVolumeSpecName: "inventory") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.862573 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "44b0558f-354e-40f9-9db7-8f5e8762f1fe" (UID: "44b0558f-354e-40f9-9db7-8f5e8762f1fe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932413 5005 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932444 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932455 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932466 5005 reconciler_common.go:293] "Volume detached for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932476 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932486 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j9vf\" (UniqueName: \"kubernetes.io/projected/44b0558f-354e-40f9-9db7-8f5e8762f1fe-kube-api-access-4j9vf\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:58 crc kubenswrapper[5005]: I0225 11:59:58.932495 5005 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44b0558f-354e-40f9-9db7-8f5e8762f1fe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.241835 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" event={"ID":"44b0558f-354e-40f9-9db7-8f5e8762f1fe","Type":"ContainerDied","Data":"49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f"} Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.241890 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.242508 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.340283 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr"] Feb 25 11:59:59 crc kubenswrapper[5005]: E0225 11:59:59.340982 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="registry-server" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.341019 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="registry-server" Feb 25 11:59:59 crc kubenswrapper[5005]: E0225 11:59:59.341054 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="extract-utilities" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.341069 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="extract-utilities" Feb 25 11:59:59 crc kubenswrapper[5005]: E0225 11:59:59.341104 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b0558f-354e-40f9-9db7-8f5e8762f1fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.341119 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b0558f-354e-40f9-9db7-8f5e8762f1fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 25 11:59:59 crc kubenswrapper[5005]: E0225 11:59:59.341154 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="extract-content" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.341167 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="extract-content" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 
11:59:59.341561 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b0558f-354e-40f9-9db7-8f5e8762f1fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.341616 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="01894e34-7fbd-4991-9f41-edfcf327d906" containerName="registry-server" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.342732 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.351941 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.352060 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.352168 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.352264 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.352527 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.352762 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.367093 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr"] Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.443183 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.443535 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.443704 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.443811 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrbl\" (UniqueName: \"kubernetes.io/projected/6aeb7945-54e1-40bf-b490-7623fac580b0-kube-api-access-gxrbl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.443930 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: 
\"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.444087 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: E0225 11:59:59.480671 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44b0558f_354e_40f9_9db7_8f5e8762f1fe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44b0558f_354e_40f9_9db7_8f5e8762f1fe.slice/crio-49cbd7d18720858dc35023a67fecfe6deaf96066efff092a4683e33d28de195f\": RecentStats: unable to find data in memory cache]" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.545202 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.545256 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" 
Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.545313 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.545329 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxrbl\" (UniqueName: \"kubernetes.io/projected/6aeb7945-54e1-40bf-b490-7623fac580b0-kube-api-access-gxrbl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.545354 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.545389 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.549467 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-secret-0\") 
pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.550674 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.550901 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.551220 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.553600 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.563241 5005 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-gxrbl\" (UniqueName: \"kubernetes.io/projected/6aeb7945-54e1-40bf-b490-7623fac580b0-kube-api-access-gxrbl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jldwr\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 11:59:59 crc kubenswrapper[5005]: I0225 11:59:59.665631 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.133667 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533680-rtq5g"] Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.135503 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.140556 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.140708 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.140758 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.155386 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533680-rtq5g"] Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.157123 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28zn\" (UniqueName: \"kubernetes.io/projected/ddfe17f5-d025-4272-93dd-d71fda7be5e9-kube-api-access-t28zn\") pod \"auto-csr-approver-29533680-rtq5g\" (UID: \"ddfe17f5-d025-4272-93dd-d71fda7be5e9\") " 
pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.165479 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666"] Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.166771 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.168782 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.168927 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.173950 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666"] Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.259182 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28zn\" (UniqueName: \"kubernetes.io/projected/ddfe17f5-d025-4272-93dd-d71fda7be5e9-kube-api-access-t28zn\") pod \"auto-csr-approver-29533680-rtq5g\" (UID: \"ddfe17f5-d025-4272-93dd-d71fda7be5e9\") " pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.277591 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28zn\" (UniqueName: \"kubernetes.io/projected/ddfe17f5-d025-4272-93dd-d71fda7be5e9-kube-api-access-t28zn\") pod \"auto-csr-approver-29533680-rtq5g\" (UID: \"ddfe17f5-d025-4272-93dd-d71fda7be5e9\") " pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.361439 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf323fa1-3a7a-4244-88b9-e704b090f3ed-secret-volume\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.361598 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zmd\" (UniqueName: \"kubernetes.io/projected/cf323fa1-3a7a-4244-88b9-e704b090f3ed-kube-api-access-l7zmd\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.361736 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf323fa1-3a7a-4244-88b9-e704b090f3ed-config-volume\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.460364 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.466403 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf323fa1-3a7a-4244-88b9-e704b090f3ed-config-volume\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.466574 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf323fa1-3a7a-4244-88b9-e704b090f3ed-secret-volume\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.466643 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zmd\" (UniqueName: \"kubernetes.io/projected/cf323fa1-3a7a-4244-88b9-e704b090f3ed-kube-api-access-l7zmd\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.469947 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf323fa1-3a7a-4244-88b9-e704b090f3ed-config-volume\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.485145 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cf323fa1-3a7a-4244-88b9-e704b090f3ed-secret-volume\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.495282 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7zmd\" (UniqueName: \"kubernetes.io/projected/cf323fa1-3a7a-4244-88b9-e704b090f3ed-kube-api-access-l7zmd\") pod \"collect-profiles-29533680-t4666\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.782813 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.785890 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr"] Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.805654 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:00:00 crc kubenswrapper[5005]: I0225 12:00:00.909522 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533680-rtq5g"] Feb 25 12:00:01 crc kubenswrapper[5005]: I0225 12:00:01.230224 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666"] Feb 25 12:00:01 crc kubenswrapper[5005]: I0225 12:00:01.264151 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" event={"ID":"cf323fa1-3a7a-4244-88b9-e704b090f3ed","Type":"ContainerStarted","Data":"8c372d280866a25bc66d3d22d1689b44b6adae576192085b459cf1a8346f580a"} Feb 25 12:00:01 crc kubenswrapper[5005]: I0225 
12:00:01.265541 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" event={"ID":"6aeb7945-54e1-40bf-b490-7623fac580b0","Type":"ContainerStarted","Data":"fbf4f7775ad247026828eaaa33da7cda68fcc7a0df7256a189b464545a74b50b"} Feb 25 12:00:01 crc kubenswrapper[5005]: I0225 12:00:01.266684 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" event={"ID":"ddfe17f5-d025-4272-93dd-d71fda7be5e9","Type":"ContainerStarted","Data":"b4264d6cef368907c74bc094db0bad5ca58233bb22d256a8752428bc08cd654a"} Feb 25 12:00:02 crc kubenswrapper[5005]: I0225 12:00:02.296748 5005 generic.go:334] "Generic (PLEG): container finished" podID="cf323fa1-3a7a-4244-88b9-e704b090f3ed" containerID="629658fa21a9ae797a251bb9e8690b3742201b6361addbe198cb95f6b4ebc248" exitCode=0 Feb 25 12:00:02 crc kubenswrapper[5005]: I0225 12:00:02.296861 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" event={"ID":"cf323fa1-3a7a-4244-88b9-e704b090f3ed","Type":"ContainerDied","Data":"629658fa21a9ae797a251bb9e8690b3742201b6361addbe198cb95f6b4ebc248"} Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.306852 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" event={"ID":"6aeb7945-54e1-40bf-b490-7623fac580b0","Type":"ContainerStarted","Data":"dbda5f7be6b752804c98c9e733700d963f64b37ccd78bf0ae7f1d7508de11613"} Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.333317 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" podStartSLOduration=2.117793665 podStartE2EDuration="4.333296224s" podCreationTimestamp="2026-02-25 11:59:59 +0000 UTC" firstStartedPulling="2026-02-25 12:00:00.805354882 +0000 UTC m=+2514.846087199" lastFinishedPulling="2026-02-25 
12:00:03.020857431 +0000 UTC m=+2517.061589758" observedRunningTime="2026-02-25 12:00:03.32229732 +0000 UTC m=+2517.363029657" watchObservedRunningTime="2026-02-25 12:00:03.333296224 +0000 UTC m=+2517.374028561" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.579039 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.723569 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7zmd\" (UniqueName: \"kubernetes.io/projected/cf323fa1-3a7a-4244-88b9-e704b090f3ed-kube-api-access-l7zmd\") pod \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.723679 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf323fa1-3a7a-4244-88b9-e704b090f3ed-config-volume\") pod \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.723772 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf323fa1-3a7a-4244-88b9-e704b090f3ed-secret-volume\") pod \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\" (UID: \"cf323fa1-3a7a-4244-88b9-e704b090f3ed\") " Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.725181 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf323fa1-3a7a-4244-88b9-e704b090f3ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf323fa1-3a7a-4244-88b9-e704b090f3ed" (UID: "cf323fa1-3a7a-4244-88b9-e704b090f3ed"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.728509 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf323fa1-3a7a-4244-88b9-e704b090f3ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf323fa1-3a7a-4244-88b9-e704b090f3ed" (UID: "cf323fa1-3a7a-4244-88b9-e704b090f3ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.728621 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf323fa1-3a7a-4244-88b9-e704b090f3ed-kube-api-access-l7zmd" (OuterVolumeSpecName: "kube-api-access-l7zmd") pod "cf323fa1-3a7a-4244-88b9-e704b090f3ed" (UID: "cf323fa1-3a7a-4244-88b9-e704b090f3ed"). InnerVolumeSpecName "kube-api-access-l7zmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.826001 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7zmd\" (UniqueName: \"kubernetes.io/projected/cf323fa1-3a7a-4244-88b9-e704b090f3ed-kube-api-access-l7zmd\") on node \"crc\" DevicePath \"\"" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.826053 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf323fa1-3a7a-4244-88b9-e704b090f3ed-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:00:03 crc kubenswrapper[5005]: I0225 12:00:03.826084 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf323fa1-3a7a-4244-88b9-e704b090f3ed-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:00:04 crc kubenswrapper[5005]: I0225 12:00:04.320902 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" 
event={"ID":"cf323fa1-3a7a-4244-88b9-e704b090f3ed","Type":"ContainerDied","Data":"8c372d280866a25bc66d3d22d1689b44b6adae576192085b459cf1a8346f580a"} Feb 25 12:00:04 crc kubenswrapper[5005]: I0225 12:00:04.320971 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c372d280866a25bc66d3d22d1689b44b6adae576192085b459cf1a8346f580a" Feb 25 12:00:04 crc kubenswrapper[5005]: I0225 12:00:04.321035 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666" Feb 25 12:00:04 crc kubenswrapper[5005]: I0225 12:00:04.649897 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9"] Feb 25 12:00:04 crc kubenswrapper[5005]: I0225 12:00:04.657226 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533635-968n9"] Feb 25 12:00:04 crc kubenswrapper[5005]: I0225 12:00:04.695427 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871158bf-c5f6-4e49-981a-bf00d5b8c4c7" path="/var/lib/kubelet/pods/871158bf-c5f6-4e49-981a-bf00d5b8c4c7/volumes" Feb 25 12:00:05 crc kubenswrapper[5005]: I0225 12:00:05.330785 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" event={"ID":"ddfe17f5-d025-4272-93dd-d71fda7be5e9","Type":"ContainerStarted","Data":"1d3e53e488a48f466e5d3765c41ca6d1c07b3b75b0c6f1d3d736957f5b31849a"} Feb 25 12:00:05 crc kubenswrapper[5005]: I0225 12:00:05.358056 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" podStartSLOduration=1.382897298 podStartE2EDuration="5.3580367s" podCreationTimestamp="2026-02-25 12:00:00 +0000 UTC" firstStartedPulling="2026-02-25 12:00:00.924660108 +0000 UTC m=+2514.965392425" lastFinishedPulling="2026-02-25 12:00:04.89979949 +0000 UTC 
m=+2518.940531827" observedRunningTime="2026-02-25 12:00:05.349355638 +0000 UTC m=+2519.390087995" watchObservedRunningTime="2026-02-25 12:00:05.3580367 +0000 UTC m=+2519.398769037" Feb 25 12:00:06 crc kubenswrapper[5005]: I0225 12:00:06.347401 5005 generic.go:334] "Generic (PLEG): container finished" podID="ddfe17f5-d025-4272-93dd-d71fda7be5e9" containerID="1d3e53e488a48f466e5d3765c41ca6d1c07b3b75b0c6f1d3d736957f5b31849a" exitCode=0 Feb 25 12:00:06 crc kubenswrapper[5005]: I0225 12:00:06.347510 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" event={"ID":"ddfe17f5-d025-4272-93dd-d71fda7be5e9","Type":"ContainerDied","Data":"1d3e53e488a48f466e5d3765c41ca6d1c07b3b75b0c6f1d3d736957f5b31849a"} Feb 25 12:00:07 crc kubenswrapper[5005]: I0225 12:00:07.728477 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:07 crc kubenswrapper[5005]: I0225 12:00:07.898549 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28zn\" (UniqueName: \"kubernetes.io/projected/ddfe17f5-d025-4272-93dd-d71fda7be5e9-kube-api-access-t28zn\") pod \"ddfe17f5-d025-4272-93dd-d71fda7be5e9\" (UID: \"ddfe17f5-d025-4272-93dd-d71fda7be5e9\") " Feb 25 12:00:07 crc kubenswrapper[5005]: I0225 12:00:07.906090 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddfe17f5-d025-4272-93dd-d71fda7be5e9-kube-api-access-t28zn" (OuterVolumeSpecName: "kube-api-access-t28zn") pod "ddfe17f5-d025-4272-93dd-d71fda7be5e9" (UID: "ddfe17f5-d025-4272-93dd-d71fda7be5e9"). InnerVolumeSpecName "kube-api-access-t28zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.001453 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28zn\" (UniqueName: \"kubernetes.io/projected/ddfe17f5-d025-4272-93dd-d71fda7be5e9-kube-api-access-t28zn\") on node \"crc\" DevicePath \"\"" Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.373252 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" event={"ID":"ddfe17f5-d025-4272-93dd-d71fda7be5e9","Type":"ContainerDied","Data":"b4264d6cef368907c74bc094db0bad5ca58233bb22d256a8752428bc08cd654a"} Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.373592 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4264d6cef368907c74bc094db0bad5ca58233bb22d256a8752428bc08cd654a" Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.373501 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533680-rtq5g" Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.417626 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533674-t928c"] Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.428177 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533674-t928c"] Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.685748 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 12:00:08 crc kubenswrapper[5005]: E0225 12:00:08.686220 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:00:08 crc kubenswrapper[5005]: I0225 12:00:08.695472 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff346b5-97c2-4bd4-8c95-6cede9eebee8" path="/var/lib/kubelet/pods/3ff346b5-97c2-4bd4-8c95-6cede9eebee8/volumes" Feb 25 12:00:19 crc kubenswrapper[5005]: I0225 12:00:19.156989 5005 scope.go:117] "RemoveContainer" containerID="a1f530703922ade8a94f5905b688352a03c649440f48f176f6fda094a4b23fd7" Feb 25 12:00:19 crc kubenswrapper[5005]: I0225 12:00:19.179958 5005 scope.go:117] "RemoveContainer" containerID="a18b02caa67b7f320e1e1122c8be72855d392f6ea3f541a2ab0ab18f6f061311" Feb 25 12:00:21 crc kubenswrapper[5005]: I0225 12:00:21.686648 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 12:00:21 crc kubenswrapper[5005]: E0225 12:00:21.687509 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:00:32 crc kubenswrapper[5005]: I0225 12:00:32.685488 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 12:00:33 crc kubenswrapper[5005]: I0225 12:00:33.584562 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"4d68713e6e19c3efba3ef33278796317e807a4a6ee0ef751e7b26fc433e930e4"} Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.145209 5005 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29533681-d94x7"] Feb 25 12:01:00 crc kubenswrapper[5005]: E0225 12:01:00.146079 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf323fa1-3a7a-4244-88b9-e704b090f3ed" containerName="collect-profiles" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.146091 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf323fa1-3a7a-4244-88b9-e704b090f3ed" containerName="collect-profiles" Feb 25 12:01:00 crc kubenswrapper[5005]: E0225 12:01:00.146111 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfe17f5-d025-4272-93dd-d71fda7be5e9" containerName="oc" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.146117 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfe17f5-d025-4272-93dd-d71fda7be5e9" containerName="oc" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.146293 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddfe17f5-d025-4272-93dd-d71fda7be5e9" containerName="oc" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.146306 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf323fa1-3a7a-4244-88b9-e704b090f3ed" containerName="collect-profiles" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.146888 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.154881 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29533681-d94x7"] Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.189890 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-combined-ca-bundle\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.189946 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-fernet-keys\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.189976 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gh5s\" (UniqueName: \"kubernetes.io/projected/39300e2f-d3dc-453a-8555-0453080ef2bd-kube-api-access-4gh5s\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.190114 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-config-data\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.291831 5005 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-combined-ca-bundle\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.291880 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-fernet-keys\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.291927 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gh5s\" (UniqueName: \"kubernetes.io/projected/39300e2f-d3dc-453a-8555-0453080ef2bd-kube-api-access-4gh5s\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.291979 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-config-data\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.298110 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-fernet-keys\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.301291 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-config-data\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.302439 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-combined-ca-bundle\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.308330 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gh5s\" (UniqueName: \"kubernetes.io/projected/39300e2f-d3dc-453a-8555-0453080ef2bd-kube-api-access-4gh5s\") pod \"keystone-cron-29533681-d94x7\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:00 crc kubenswrapper[5005]: I0225 12:01:00.474839 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:01 crc kubenswrapper[5005]: I0225 12:01:01.696215 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29533681-d94x7"] Feb 25 12:01:02 crc kubenswrapper[5005]: I0225 12:01:02.332307 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533681-d94x7" event={"ID":"39300e2f-d3dc-453a-8555-0453080ef2bd","Type":"ContainerStarted","Data":"2a3170832e5e305f540384a845a8604846fd41760af6ea3d9d7141c852632fbe"} Feb 25 12:01:02 crc kubenswrapper[5005]: I0225 12:01:02.332641 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533681-d94x7" event={"ID":"39300e2f-d3dc-453a-8555-0453080ef2bd","Type":"ContainerStarted","Data":"7fe4ad98745c2c7e85136dba0fda34e7f2e5d45c1578e9e9b1bf00a65b8e5e8a"} Feb 25 12:01:02 crc kubenswrapper[5005]: I0225 12:01:02.350178 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29533681-d94x7" podStartSLOduration=2.350129243 podStartE2EDuration="2.350129243s" podCreationTimestamp="2026-02-25 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:01:02.346163489 +0000 UTC m=+2576.386895846" watchObservedRunningTime="2026-02-25 12:01:02.350129243 +0000 UTC m=+2576.390861570" Feb 25 12:01:04 crc kubenswrapper[5005]: I0225 12:01:04.350174 5005 generic.go:334] "Generic (PLEG): container finished" podID="39300e2f-d3dc-453a-8555-0453080ef2bd" containerID="2a3170832e5e305f540384a845a8604846fd41760af6ea3d9d7141c852632fbe" exitCode=0 Feb 25 12:01:04 crc kubenswrapper[5005]: I0225 12:01:04.350249 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533681-d94x7" 
event={"ID":"39300e2f-d3dc-453a-8555-0453080ef2bd","Type":"ContainerDied","Data":"2a3170832e5e305f540384a845a8604846fd41760af6ea3d9d7141c852632fbe"} Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.713572 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.742516 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-config-data\") pod \"39300e2f-d3dc-453a-8555-0453080ef2bd\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.742619 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-fernet-keys\") pod \"39300e2f-d3dc-453a-8555-0453080ef2bd\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.742713 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gh5s\" (UniqueName: \"kubernetes.io/projected/39300e2f-d3dc-453a-8555-0453080ef2bd-kube-api-access-4gh5s\") pod \"39300e2f-d3dc-453a-8555-0453080ef2bd\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.742855 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-combined-ca-bundle\") pod \"39300e2f-d3dc-453a-8555-0453080ef2bd\" (UID: \"39300e2f-d3dc-453a-8555-0453080ef2bd\") " Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.749898 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39300e2f-d3dc-453a-8555-0453080ef2bd-kube-api-access-4gh5s" 
(OuterVolumeSpecName: "kube-api-access-4gh5s") pod "39300e2f-d3dc-453a-8555-0453080ef2bd" (UID: "39300e2f-d3dc-453a-8555-0453080ef2bd"). InnerVolumeSpecName "kube-api-access-4gh5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.751689 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39300e2f-d3dc-453a-8555-0453080ef2bd" (UID: "39300e2f-d3dc-453a-8555-0453080ef2bd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.771541 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39300e2f-d3dc-453a-8555-0453080ef2bd" (UID: "39300e2f-d3dc-453a-8555-0453080ef2bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.795208 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-config-data" (OuterVolumeSpecName: "config-data") pod "39300e2f-d3dc-453a-8555-0453080ef2bd" (UID: "39300e2f-d3dc-453a-8555-0453080ef2bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.845449 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.845477 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.845486 5005 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39300e2f-d3dc-453a-8555-0453080ef2bd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 25 12:01:05 crc kubenswrapper[5005]: I0225 12:01:05.845495 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gh5s\" (UniqueName: \"kubernetes.io/projected/39300e2f-d3dc-453a-8555-0453080ef2bd-kube-api-access-4gh5s\") on node \"crc\" DevicePath \"\"" Feb 25 12:01:06 crc kubenswrapper[5005]: I0225 12:01:06.374976 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533681-d94x7" event={"ID":"39300e2f-d3dc-453a-8555-0453080ef2bd","Type":"ContainerDied","Data":"7fe4ad98745c2c7e85136dba0fda34e7f2e5d45c1578e9e9b1bf00a65b8e5e8a"} Feb 25 12:01:06 crc kubenswrapper[5005]: I0225 12:01:06.375018 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533681-d94x7" Feb 25 12:01:06 crc kubenswrapper[5005]: I0225 12:01:06.375034 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe4ad98745c2c7e85136dba0fda34e7f2e5d45c1578e9e9b1bf00a65b8e5e8a" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.144958 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533682-l2dn9"] Feb 25 12:02:00 crc kubenswrapper[5005]: E0225 12:02:00.145906 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39300e2f-d3dc-453a-8555-0453080ef2bd" containerName="keystone-cron" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.145920 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39300e2f-d3dc-453a-8555-0453080ef2bd" containerName="keystone-cron" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.146105 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="39300e2f-d3dc-453a-8555-0453080ef2bd" containerName="keystone-cron" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.146742 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.157687 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533682-l2dn9"] Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.158297 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.158534 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.158713 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.242095 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gnk\" (UniqueName: \"kubernetes.io/projected/be9a9c8d-c303-4a5b-af89-b546f299d069-kube-api-access-f8gnk\") pod \"auto-csr-approver-29533682-l2dn9\" (UID: \"be9a9c8d-c303-4a5b-af89-b546f299d069\") " pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.343429 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gnk\" (UniqueName: \"kubernetes.io/projected/be9a9c8d-c303-4a5b-af89-b546f299d069-kube-api-access-f8gnk\") pod \"auto-csr-approver-29533682-l2dn9\" (UID: \"be9a9c8d-c303-4a5b-af89-b546f299d069\") " pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.366519 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gnk\" (UniqueName: \"kubernetes.io/projected/be9a9c8d-c303-4a5b-af89-b546f299d069-kube-api-access-f8gnk\") pod \"auto-csr-approver-29533682-l2dn9\" (UID: \"be9a9c8d-c303-4a5b-af89-b546f299d069\") " 
pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.463237 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:00 crc kubenswrapper[5005]: I0225 12:02:00.984743 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533682-l2dn9"] Feb 25 12:02:00 crc kubenswrapper[5005]: W0225 12:02:00.992063 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9a9c8d_c303_4a5b_af89_b546f299d069.slice/crio-edaa6e26e29a9bc851e28613f2869fc59509a00d3acb4c9d796509485f7d8a58 WatchSource:0}: Error finding container edaa6e26e29a9bc851e28613f2869fc59509a00d3acb4c9d796509485f7d8a58: Status 404 returned error can't find the container with id edaa6e26e29a9bc851e28613f2869fc59509a00d3acb4c9d796509485f7d8a58 Feb 25 12:02:01 crc kubenswrapper[5005]: I0225 12:02:01.941584 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" event={"ID":"be9a9c8d-c303-4a5b-af89-b546f299d069","Type":"ContainerStarted","Data":"edaa6e26e29a9bc851e28613f2869fc59509a00d3acb4c9d796509485f7d8a58"} Feb 25 12:02:02 crc kubenswrapper[5005]: E0225 12:02:02.902261 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9a9c8d_c303_4a5b_af89_b546f299d069.slice/crio-e806359198a2b7c588f1fd50314aef4d6da2c8c10fd9ce849938070e1e4f87c4.scope\": RecentStats: unable to find data in memory cache]" Feb 25 12:02:02 crc kubenswrapper[5005]: I0225 12:02:02.952511 5005 generic.go:334] "Generic (PLEG): container finished" podID="be9a9c8d-c303-4a5b-af89-b546f299d069" containerID="e806359198a2b7c588f1fd50314aef4d6da2c8c10fd9ce849938070e1e4f87c4" exitCode=0 Feb 25 12:02:02 crc 
kubenswrapper[5005]: I0225 12:02:02.952558 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" event={"ID":"be9a9c8d-c303-4a5b-af89-b546f299d069","Type":"ContainerDied","Data":"e806359198a2b7c588f1fd50314aef4d6da2c8c10fd9ce849938070e1e4f87c4"} Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.304965 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.340919 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8gnk\" (UniqueName: \"kubernetes.io/projected/be9a9c8d-c303-4a5b-af89-b546f299d069-kube-api-access-f8gnk\") pod \"be9a9c8d-c303-4a5b-af89-b546f299d069\" (UID: \"be9a9c8d-c303-4a5b-af89-b546f299d069\") " Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.346902 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9a9c8d-c303-4a5b-af89-b546f299d069-kube-api-access-f8gnk" (OuterVolumeSpecName: "kube-api-access-f8gnk") pod "be9a9c8d-c303-4a5b-af89-b546f299d069" (UID: "be9a9c8d-c303-4a5b-af89-b546f299d069"). InnerVolumeSpecName "kube-api-access-f8gnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.443190 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8gnk\" (UniqueName: \"kubernetes.io/projected/be9a9c8d-c303-4a5b-af89-b546f299d069-kube-api-access-f8gnk\") on node \"crc\" DevicePath \"\"" Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.969001 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" event={"ID":"be9a9c8d-c303-4a5b-af89-b546f299d069","Type":"ContainerDied","Data":"edaa6e26e29a9bc851e28613f2869fc59509a00d3acb4c9d796509485f7d8a58"} Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.969040 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edaa6e26e29a9bc851e28613f2869fc59509a00d3acb4c9d796509485f7d8a58" Feb 25 12:02:04 crc kubenswrapper[5005]: I0225 12:02:04.969055 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533682-l2dn9" Feb 25 12:02:05 crc kubenswrapper[5005]: I0225 12:02:05.383515 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533676-qgdl8"] Feb 25 12:02:05 crc kubenswrapper[5005]: I0225 12:02:05.390693 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533676-qgdl8"] Feb 25 12:02:06 crc kubenswrapper[5005]: I0225 12:02:06.700638 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e10073-8de9-4f72-ba16-6f2b4a42d757" path="/var/lib/kubelet/pods/a4e10073-8de9-4f72-ba16-6f2b4a42d757/volumes" Feb 25 12:02:19 crc kubenswrapper[5005]: I0225 12:02:19.316062 5005 scope.go:117] "RemoveContainer" containerID="15c5efba7becaaf9b6636843e39ad8cfab700da83f166cb98ca85f157c4dcef5" Feb 25 12:02:58 crc kubenswrapper[5005]: I0225 12:02:58.088063 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:02:58 crc kubenswrapper[5005]: I0225 12:02:58.088927 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.607018 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vgw6s"] Feb 25 12:03:03 crc kubenswrapper[5005]: E0225 12:03:03.608481 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9a9c8d-c303-4a5b-af89-b546f299d069" containerName="oc" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.608508 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9a9c8d-c303-4a5b-af89-b546f299d069" containerName="oc" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.608878 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9a9c8d-c303-4a5b-af89-b546f299d069" containerName="oc" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.610982 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.626057 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vgw6s"] Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.723111 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-catalog-content\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.723518 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-utilities\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.723597 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf8kg\" (UniqueName: \"kubernetes.io/projected/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-kube-api-access-zf8kg\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.825632 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf8kg\" (UniqueName: \"kubernetes.io/projected/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-kube-api-access-zf8kg\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.826128 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-catalog-content\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.826262 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-utilities\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.826775 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-catalog-content\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.826859 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-utilities\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.857607 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf8kg\" (UniqueName: \"kubernetes.io/projected/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-kube-api-access-zf8kg\") pod \"redhat-operators-vgw6s\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:03 crc kubenswrapper[5005]: I0225 12:03:03.933756 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:04 crc kubenswrapper[5005]: I0225 12:03:04.444617 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vgw6s"] Feb 25 12:03:04 crc kubenswrapper[5005]: I0225 12:03:04.513178 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerStarted","Data":"bcf9a55e3a64f1e4a0e382363e5db9def677a38748ac1ab574e6ca1e0c20f33f"} Feb 25 12:03:05 crc kubenswrapper[5005]: I0225 12:03:05.523769 5005 generic.go:334] "Generic (PLEG): container finished" podID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerID="3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407" exitCode=0 Feb 25 12:03:05 crc kubenswrapper[5005]: I0225 12:03:05.523814 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerDied","Data":"3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407"} Feb 25 12:03:06 crc kubenswrapper[5005]: I0225 12:03:06.534702 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerStarted","Data":"f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43"} Feb 25 12:03:07 crc kubenswrapper[5005]: I0225 12:03:07.544631 5005 generic.go:334] "Generic (PLEG): container finished" podID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerID="f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43" exitCode=0 Feb 25 12:03:07 crc kubenswrapper[5005]: I0225 12:03:07.544759 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" 
event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerDied","Data":"f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43"} Feb 25 12:03:08 crc kubenswrapper[5005]: I0225 12:03:08.555917 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerStarted","Data":"eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd"} Feb 25 12:03:08 crc kubenswrapper[5005]: I0225 12:03:08.583704 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vgw6s" podStartSLOduration=3.117487012 podStartE2EDuration="5.583673171s" podCreationTimestamp="2026-02-25 12:03:03 +0000 UTC" firstStartedPulling="2026-02-25 12:03:05.525442443 +0000 UTC m=+2699.566174770" lastFinishedPulling="2026-02-25 12:03:07.991628602 +0000 UTC m=+2702.032360929" observedRunningTime="2026-02-25 12:03:08.573213984 +0000 UTC m=+2702.613946321" watchObservedRunningTime="2026-02-25 12:03:08.583673171 +0000 UTC m=+2702.624405508" Feb 25 12:03:13 crc kubenswrapper[5005]: I0225 12:03:13.933973 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:13 crc kubenswrapper[5005]: I0225 12:03:13.934671 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:13 crc kubenswrapper[5005]: I0225 12:03:13.977724 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:14 crc kubenswrapper[5005]: I0225 12:03:14.660228 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:14 crc kubenswrapper[5005]: I0225 12:03:14.730876 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-vgw6s"] Feb 25 12:03:16 crc kubenswrapper[5005]: I0225 12:03:16.623526 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vgw6s" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="registry-server" containerID="cri-o://eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd" gracePeriod=2 Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.116399 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.266733 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-utilities\") pod \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.267013 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf8kg\" (UniqueName: \"kubernetes.io/projected/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-kube-api-access-zf8kg\") pod \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.267164 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-catalog-content\") pod \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\" (UID: \"d44b08ca-3a56-4b2b-8b3d-36e62f01e668\") " Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.267905 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-utilities" (OuterVolumeSpecName: "utilities") pod "d44b08ca-3a56-4b2b-8b3d-36e62f01e668" (UID: 
"d44b08ca-3a56-4b2b-8b3d-36e62f01e668"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.274683 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-kube-api-access-zf8kg" (OuterVolumeSpecName: "kube-api-access-zf8kg") pod "d44b08ca-3a56-4b2b-8b3d-36e62f01e668" (UID: "d44b08ca-3a56-4b2b-8b3d-36e62f01e668"). InnerVolumeSpecName "kube-api-access-zf8kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.369272 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.369311 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf8kg\" (UniqueName: \"kubernetes.io/projected/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-kube-api-access-zf8kg\") on node \"crc\" DevicePath \"\"" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.636836 5005 generic.go:334] "Generic (PLEG): container finished" podID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerID="eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd" exitCode=0 Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.636877 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerDied","Data":"eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd"} Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.636902 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vgw6s" 
event={"ID":"d44b08ca-3a56-4b2b-8b3d-36e62f01e668","Type":"ContainerDied","Data":"bcf9a55e3a64f1e4a0e382363e5db9def677a38748ac1ab574e6ca1e0c20f33f"} Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.636926 5005 scope.go:117] "RemoveContainer" containerID="eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.638693 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vgw6s" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.657187 5005 scope.go:117] "RemoveContainer" containerID="f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.688827 5005 scope.go:117] "RemoveContainer" containerID="3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.711341 5005 scope.go:117] "RemoveContainer" containerID="eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd" Feb 25 12:03:17 crc kubenswrapper[5005]: E0225 12:03:17.711801 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd\": container with ID starting with eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd not found: ID does not exist" containerID="eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.711835 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd"} err="failed to get container status \"eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd\": rpc error: code = NotFound desc = could not find container \"eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd\": 
container with ID starting with eb047092d8f8dbb2624716ec0940e422b638e273998698c657389aaf2cec81dd not found: ID does not exist" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.711858 5005 scope.go:117] "RemoveContainer" containerID="f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43" Feb 25 12:03:17 crc kubenswrapper[5005]: E0225 12:03:17.712116 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43\": container with ID starting with f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43 not found: ID does not exist" containerID="f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.712136 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43"} err="failed to get container status \"f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43\": rpc error: code = NotFound desc = could not find container \"f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43\": container with ID starting with f8cf3ed453650155f2ef81e1d5b275a90dab2903fb6a63c53fac0cdf9ccaad43 not found: ID does not exist" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.712150 5005 scope.go:117] "RemoveContainer" containerID="3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407" Feb 25 12:03:17 crc kubenswrapper[5005]: E0225 12:03:17.712422 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407\": container with ID starting with 3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407 not found: ID does not exist" 
containerID="3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407" Feb 25 12:03:17 crc kubenswrapper[5005]: I0225 12:03:17.712443 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407"} err="failed to get container status \"3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407\": rpc error: code = NotFound desc = could not find container \"3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407\": container with ID starting with 3f715cbd6446b0d18ce5d5da7963a6a4bfb7b11e472e98ac991f33b7154bf407 not found: ID does not exist" Feb 25 12:03:18 crc kubenswrapper[5005]: I0225 12:03:18.731150 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d44b08ca-3a56-4b2b-8b3d-36e62f01e668" (UID: "d44b08ca-3a56-4b2b-8b3d-36e62f01e668"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:03:18 crc kubenswrapper[5005]: I0225 12:03:18.796477 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b08ca-3a56-4b2b-8b3d-36e62f01e668-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:03:18 crc kubenswrapper[5005]: I0225 12:03:18.881446 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vgw6s"] Feb 25 12:03:18 crc kubenswrapper[5005]: I0225 12:03:18.888932 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vgw6s"] Feb 25 12:03:20 crc kubenswrapper[5005]: I0225 12:03:20.703313 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" path="/var/lib/kubelet/pods/d44b08ca-3a56-4b2b-8b3d-36e62f01e668/volumes" Feb 25 12:03:28 crc kubenswrapper[5005]: I0225 12:03:28.087939 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:03:28 crc kubenswrapper[5005]: I0225 12:03:28.088554 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:03:58 crc kubenswrapper[5005]: I0225 12:03:58.088006 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 25 12:03:58 crc kubenswrapper[5005]: I0225 12:03:58.088916 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:03:58 crc kubenswrapper[5005]: I0225 12:03:58.088977 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:03:58 crc kubenswrapper[5005]: I0225 12:03:58.089941 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d68713e6e19c3efba3ef33278796317e807a4a6ee0ef751e7b26fc433e930e4"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:03:58 crc kubenswrapper[5005]: I0225 12:03:58.090009 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://4d68713e6e19c3efba3ef33278796317e807a4a6ee0ef751e7b26fc433e930e4" gracePeriod=600 Feb 25 12:03:59 crc kubenswrapper[5005]: I0225 12:03:59.084333 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="4d68713e6e19c3efba3ef33278796317e807a4a6ee0ef751e7b26fc433e930e4" exitCode=0 Feb 25 12:03:59 crc kubenswrapper[5005]: I0225 12:03:59.084453 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"4d68713e6e19c3efba3ef33278796317e807a4a6ee0ef751e7b26fc433e930e4"} Feb 25 12:03:59 crc kubenswrapper[5005]: I0225 12:03:59.085246 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52"} Feb 25 12:03:59 crc kubenswrapper[5005]: I0225 12:03:59.085278 5005 scope.go:117] "RemoveContainer" containerID="0a0c931f6f39c45f66a41d485326ae99130758f22d324cc3fec975dfad96b162" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.151711 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533684-gzchd"] Feb 25 12:04:00 crc kubenswrapper[5005]: E0225 12:04:00.153171 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="registry-server" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.153197 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="registry-server" Feb 25 12:04:00 crc kubenswrapper[5005]: E0225 12:04:00.153221 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="extract-utilities" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.153242 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="extract-utilities" Feb 25 12:04:00 crc kubenswrapper[5005]: E0225 12:04:00.153283 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="extract-content" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.153295 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" 
containerName="extract-content" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.153693 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44b08ca-3a56-4b2b-8b3d-36e62f01e668" containerName="registry-server" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.155095 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.157553 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.157649 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.159163 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.177307 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533684-gzchd"] Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.282483 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzrr\" (UniqueName: \"kubernetes.io/projected/be9206cf-f74e-4999-8555-a8d30375cfe0-kube-api-access-lqzrr\") pod \"auto-csr-approver-29533684-gzchd\" (UID: \"be9206cf-f74e-4999-8555-a8d30375cfe0\") " pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.384469 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzrr\" (UniqueName: \"kubernetes.io/projected/be9206cf-f74e-4999-8555-a8d30375cfe0-kube-api-access-lqzrr\") pod \"auto-csr-approver-29533684-gzchd\" (UID: \"be9206cf-f74e-4999-8555-a8d30375cfe0\") " pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 
12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.426493 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzrr\" (UniqueName: \"kubernetes.io/projected/be9206cf-f74e-4999-8555-a8d30375cfe0-kube-api-access-lqzrr\") pod \"auto-csr-approver-29533684-gzchd\" (UID: \"be9206cf-f74e-4999-8555-a8d30375cfe0\") " pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.485176 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 12:04:00 crc kubenswrapper[5005]: I0225 12:04:00.969103 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533684-gzchd"] Feb 25 12:04:00 crc kubenswrapper[5005]: W0225 12:04:00.976332 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9206cf_f74e_4999_8555_a8d30375cfe0.slice/crio-07e6a85ee3fbabc015b70a4d745e2eced6aea22dc2cd7c65833d14393dffbb67 WatchSource:0}: Error finding container 07e6a85ee3fbabc015b70a4d745e2eced6aea22dc2cd7c65833d14393dffbb67: Status 404 returned error can't find the container with id 07e6a85ee3fbabc015b70a4d745e2eced6aea22dc2cd7c65833d14393dffbb67 Feb 25 12:04:01 crc kubenswrapper[5005]: I0225 12:04:01.118007 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533684-gzchd" event={"ID":"be9206cf-f74e-4999-8555-a8d30375cfe0","Type":"ContainerStarted","Data":"07e6a85ee3fbabc015b70a4d745e2eced6aea22dc2cd7c65833d14393dffbb67"} Feb 25 12:04:03 crc kubenswrapper[5005]: I0225 12:04:03.141589 5005 generic.go:334] "Generic (PLEG): container finished" podID="be9206cf-f74e-4999-8555-a8d30375cfe0" containerID="2d5bb7409d133c5a5b1fad84b7a7f0e3285edd14105b07455ab1572fd521d948" exitCode=0 Feb 25 12:04:03 crc kubenswrapper[5005]: I0225 12:04:03.141708 5005 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-infra/auto-csr-approver-29533684-gzchd" event={"ID":"be9206cf-f74e-4999-8555-a8d30375cfe0","Type":"ContainerDied","Data":"2d5bb7409d133c5a5b1fad84b7a7f0e3285edd14105b07455ab1572fd521d948"} Feb 25 12:04:04 crc kubenswrapper[5005]: I0225 12:04:04.666629 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 12:04:04 crc kubenswrapper[5005]: I0225 12:04:04.795190 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqzrr\" (UniqueName: \"kubernetes.io/projected/be9206cf-f74e-4999-8555-a8d30375cfe0-kube-api-access-lqzrr\") pod \"be9206cf-f74e-4999-8555-a8d30375cfe0\" (UID: \"be9206cf-f74e-4999-8555-a8d30375cfe0\") " Feb 25 12:04:04 crc kubenswrapper[5005]: I0225 12:04:04.805720 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9206cf-f74e-4999-8555-a8d30375cfe0-kube-api-access-lqzrr" (OuterVolumeSpecName: "kube-api-access-lqzrr") pod "be9206cf-f74e-4999-8555-a8d30375cfe0" (UID: "be9206cf-f74e-4999-8555-a8d30375cfe0"). InnerVolumeSpecName "kube-api-access-lqzrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:04:04 crc kubenswrapper[5005]: I0225 12:04:04.900214 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqzrr\" (UniqueName: \"kubernetes.io/projected/be9206cf-f74e-4999-8555-a8d30375cfe0-kube-api-access-lqzrr\") on node \"crc\" DevicePath \"\"" Feb 25 12:04:05 crc kubenswrapper[5005]: I0225 12:04:05.165314 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533684-gzchd" event={"ID":"be9206cf-f74e-4999-8555-a8d30375cfe0","Type":"ContainerDied","Data":"07e6a85ee3fbabc015b70a4d745e2eced6aea22dc2cd7c65833d14393dffbb67"} Feb 25 12:04:05 crc kubenswrapper[5005]: I0225 12:04:05.165415 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533684-gzchd" Feb 25 12:04:05 crc kubenswrapper[5005]: I0225 12:04:05.165437 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e6a85ee3fbabc015b70a4d745e2eced6aea22dc2cd7c65833d14393dffbb67" Feb 25 12:04:05 crc kubenswrapper[5005]: I0225 12:04:05.754131 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533678-r67xh"] Feb 25 12:04:05 crc kubenswrapper[5005]: I0225 12:04:05.765880 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533678-r67xh"] Feb 25 12:04:06 crc kubenswrapper[5005]: I0225 12:04:06.176453 5005 generic.go:334] "Generic (PLEG): container finished" podID="6aeb7945-54e1-40bf-b490-7623fac580b0" containerID="dbda5f7be6b752804c98c9e733700d963f64b37ccd78bf0ae7f1d7508de11613" exitCode=0 Feb 25 12:04:06 crc kubenswrapper[5005]: I0225 12:04:06.176530 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" event={"ID":"6aeb7945-54e1-40bf-b490-7623fac580b0","Type":"ContainerDied","Data":"dbda5f7be6b752804c98c9e733700d963f64b37ccd78bf0ae7f1d7508de11613"} Feb 25 12:04:06 crc kubenswrapper[5005]: I0225 12:04:06.706543 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330ac3af-0f49-422f-8bbf-81585552eec0" path="/var/lib/kubelet/pods/330ac3af-0f49-422f-8bbf-81585552eec0/volumes" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.723829 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.861896 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ceph\") pod \"6aeb7945-54e1-40bf-b490-7623fac580b0\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.861981 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-combined-ca-bundle\") pod \"6aeb7945-54e1-40bf-b490-7623fac580b0\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.862007 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxrbl\" (UniqueName: \"kubernetes.io/projected/6aeb7945-54e1-40bf-b490-7623fac580b0-kube-api-access-gxrbl\") pod \"6aeb7945-54e1-40bf-b490-7623fac580b0\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.862102 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-secret-0\") pod \"6aeb7945-54e1-40bf-b490-7623fac580b0\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.862175 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ssh-key-openstack-edpm-ipam\") pod \"6aeb7945-54e1-40bf-b490-7623fac580b0\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.862202 5005 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-inventory\") pod \"6aeb7945-54e1-40bf-b490-7623fac580b0\" (UID: \"6aeb7945-54e1-40bf-b490-7623fac580b0\") " Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.868843 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ceph" (OuterVolumeSpecName: "ceph") pod "6aeb7945-54e1-40bf-b490-7623fac580b0" (UID: "6aeb7945-54e1-40bf-b490-7623fac580b0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.868887 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6aeb7945-54e1-40bf-b490-7623fac580b0" (UID: "6aeb7945-54e1-40bf-b490-7623fac580b0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.869155 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aeb7945-54e1-40bf-b490-7623fac580b0-kube-api-access-gxrbl" (OuterVolumeSpecName: "kube-api-access-gxrbl") pod "6aeb7945-54e1-40bf-b490-7623fac580b0" (UID: "6aeb7945-54e1-40bf-b490-7623fac580b0"). InnerVolumeSpecName "kube-api-access-gxrbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.889024 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "6aeb7945-54e1-40bf-b490-7623fac580b0" (UID: "6aeb7945-54e1-40bf-b490-7623fac580b0"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.890044 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-inventory" (OuterVolumeSpecName: "inventory") pod "6aeb7945-54e1-40bf-b490-7623fac580b0" (UID: "6aeb7945-54e1-40bf-b490-7623fac580b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.891333 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6aeb7945-54e1-40bf-b490-7623fac580b0" (UID: "6aeb7945-54e1-40bf-b490-7623fac580b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.965415 5005 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.965863 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.966004 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.966023 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-ceph\") on node \"crc\" 
DevicePath \"\"" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.966048 5005 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aeb7945-54e1-40bf-b490-7623fac580b0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:04:07 crc kubenswrapper[5005]: I0225 12:04:07.966071 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxrbl\" (UniqueName: \"kubernetes.io/projected/6aeb7945-54e1-40bf-b490-7623fac580b0-kube-api-access-gxrbl\") on node \"crc\" DevicePath \"\"" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.205953 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" event={"ID":"6aeb7945-54e1-40bf-b490-7623fac580b0","Type":"ContainerDied","Data":"fbf4f7775ad247026828eaaa33da7cda68fcc7a0df7256a189b464545a74b50b"} Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.206477 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf4f7775ad247026828eaaa33da7cda68fcc7a0df7256a189b464545a74b50b" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.206059 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jldwr" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.293436 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl"] Feb 25 12:04:08 crc kubenswrapper[5005]: E0225 12:04:08.293947 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9206cf-f74e-4999-8555-a8d30375cfe0" containerName="oc" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.293972 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9206cf-f74e-4999-8555-a8d30375cfe0" containerName="oc" Feb 25 12:04:08 crc kubenswrapper[5005]: E0225 12:04:08.294001 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aeb7945-54e1-40bf-b490-7623fac580b0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.294013 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aeb7945-54e1-40bf-b490-7623fac580b0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.294266 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aeb7945-54e1-40bf-b490-7623fac580b0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.294304 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9206cf-f74e-4999-8555-a8d30375cfe0" containerName="oc" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.295142 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.298274 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.298861 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.298915 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.298917 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.299080 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.299098 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.299204 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dgrbb" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.299336 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.301185 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.312761 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl"] Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374487 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374573 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374621 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374770 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374819 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374859 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374893 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374918 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374956 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.374989 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.375013 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx67f\" (UniqueName: \"kubernetes.io/projected/13abaed5-d544-41f0-8bd9-07bbd0798c33-kube-api-access-fx67f\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.375041 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.375083 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476505 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476565 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476645 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476682 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476708 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476728 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476750 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476772 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476795 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476813 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx67f\" (UniqueName: \"kubernetes.io/projected/13abaed5-d544-41f0-8bd9-07bbd0798c33-kube-api-access-fx67f\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476832 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476855 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.476913 
5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.478930 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.480234 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.482296 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.482454 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.484946 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.486924 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.488536 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.488917 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 
12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.489241 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.489671 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.489893 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.492928 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.494923 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fx67f\" (UniqueName: \"kubernetes.io/projected/13abaed5-d544-41f0-8bd9-07bbd0798c33-kube-api-access-fx67f\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.620167 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:04:08 crc kubenswrapper[5005]: I0225 12:04:08.945139 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl"] Feb 25 12:04:08 crc kubenswrapper[5005]: W0225 12:04:08.956738 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13abaed5_d544_41f0_8bd9_07bbd0798c33.slice/crio-31a3c4e25ad955cde5cc1a25abd9bfac9b004ac8f5619e3dfe75c21e1dd54ccb WatchSource:0}: Error finding container 31a3c4e25ad955cde5cc1a25abd9bfac9b004ac8f5619e3dfe75c21e1dd54ccb: Status 404 returned error can't find the container with id 31a3c4e25ad955cde5cc1a25abd9bfac9b004ac8f5619e3dfe75c21e1dd54ccb Feb 25 12:04:09 crc kubenswrapper[5005]: I0225 12:04:09.222674 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" event={"ID":"13abaed5-d544-41f0-8bd9-07bbd0798c33","Type":"ContainerStarted","Data":"31a3c4e25ad955cde5cc1a25abd9bfac9b004ac8f5619e3dfe75c21e1dd54ccb"} Feb 25 12:04:10 crc kubenswrapper[5005]: I0225 12:04:10.235714 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" event={"ID":"13abaed5-d544-41f0-8bd9-07bbd0798c33","Type":"ContainerStarted","Data":"1b824557f04005944c13ce102a955c3f5f855102ee9f2300ea9ea5b0970bf495"} Feb 25 12:04:10 crc 
kubenswrapper[5005]: I0225 12:04:10.278620 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" podStartSLOduration=1.787536988 podStartE2EDuration="2.278574446s" podCreationTimestamp="2026-02-25 12:04:08 +0000 UTC" firstStartedPulling="2026-02-25 12:04:08.960104457 +0000 UTC m=+2763.000836784" lastFinishedPulling="2026-02-25 12:04:09.451141895 +0000 UTC m=+2763.491874242" observedRunningTime="2026-02-25 12:04:10.257049962 +0000 UTC m=+2764.297782289" watchObservedRunningTime="2026-02-25 12:04:10.278574446 +0000 UTC m=+2764.319306803" Feb 25 12:04:19 crc kubenswrapper[5005]: I0225 12:04:19.425931 5005 scope.go:117] "RemoveContainer" containerID="083bef0023c1dca3b89c5f55eaa66c7dd9572acab6cdd710e0f88fa67566c5b6" Feb 25 12:05:44 crc kubenswrapper[5005]: I0225 12:05:44.741106 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bwnwt"] Feb 25 12:05:44 crc kubenswrapper[5005]: I0225 12:05:44.743408 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:44 crc kubenswrapper[5005]: I0225 12:05:44.842518 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bwnwt"] Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.141289 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-catalog-content\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.141506 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-utilities\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.141628 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqxc\" (UniqueName: \"kubernetes.io/projected/305cdf9b-3559-4c5f-9096-816626efe32c-kube-api-access-gmqxc\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.179888 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8wv99"] Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.181731 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.207537 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8wv99"] Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.243285 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-catalog-content\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.243400 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-catalog-content\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.243448 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjr4g\" (UniqueName: \"kubernetes.io/projected/022be3c7-53ad-4b1a-9841-1195783317c9-kube-api-access-hjr4g\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.243512 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-utilities\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.243573 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-utilities\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.243597 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqxc\" (UniqueName: \"kubernetes.io/projected/305cdf9b-3559-4c5f-9096-816626efe32c-kube-api-access-gmqxc\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.244404 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-catalog-content\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.244681 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-utilities\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.270209 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqxc\" (UniqueName: \"kubernetes.io/projected/305cdf9b-3559-4c5f-9096-816626efe32c-kube-api-access-gmqxc\") pod \"certified-operators-bwnwt\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.345662 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-catalog-content\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.346018 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjr4g\" (UniqueName: \"kubernetes.io/projected/022be3c7-53ad-4b1a-9841-1195783317c9-kube-api-access-hjr4g\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.346149 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-utilities\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.346745 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-catalog-content\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.346883 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-utilities\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.372401 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hjr4g\" (UniqueName: \"kubernetes.io/projected/022be3c7-53ad-4b1a-9841-1195783317c9-kube-api-access-hjr4g\") pod \"community-operators-8wv99\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.441460 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.510872 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:45 crc kubenswrapper[5005]: I0225 12:05:45.910468 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8wv99"] Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.044268 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bwnwt"] Feb 25 12:05:46 crc kubenswrapper[5005]: W0225 12:05:46.055094 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod305cdf9b_3559_4c5f_9096_816626efe32c.slice/crio-4fa8430ff9868c2bdbc480f70003151290a6800800793672a080c52dadec9a04 WatchSource:0}: Error finding container 4fa8430ff9868c2bdbc480f70003151290a6800800793672a080c52dadec9a04: Status 404 returned error can't find the container with id 4fa8430ff9868c2bdbc480f70003151290a6800800793672a080c52dadec9a04 Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.253932 5005 generic.go:334] "Generic (PLEG): container finished" podID="305cdf9b-3559-4c5f-9096-816626efe32c" containerID="0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9" exitCode=0 Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.254042 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" 
event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerDied","Data":"0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9"} Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.254303 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerStarted","Data":"4fa8430ff9868c2bdbc480f70003151290a6800800793672a080c52dadec9a04"} Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.256274 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.256469 5005 generic.go:334] "Generic (PLEG): container finished" podID="022be3c7-53ad-4b1a-9841-1195783317c9" containerID="d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6" exitCode=0 Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.256502 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wv99" event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerDied","Data":"d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6"} Feb 25 12:05:46 crc kubenswrapper[5005]: I0225 12:05:46.256524 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wv99" event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerStarted","Data":"db3d9bf9230055a57a330e92eb40138fc9037775603f196ff2826147a3cbc41c"} Feb 25 12:05:47 crc kubenswrapper[5005]: I0225 12:05:47.265486 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerStarted","Data":"55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3"} Feb 25 12:05:47 crc kubenswrapper[5005]: I0225 12:05:47.282524 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8wv99" event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerStarted","Data":"de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d"} Feb 25 12:05:48 crc kubenswrapper[5005]: I0225 12:05:48.295092 5005 generic.go:334] "Generic (PLEG): container finished" podID="022be3c7-53ad-4b1a-9841-1195783317c9" containerID="de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d" exitCode=0 Feb 25 12:05:48 crc kubenswrapper[5005]: I0225 12:05:48.295224 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wv99" event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerDied","Data":"de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d"} Feb 25 12:05:48 crc kubenswrapper[5005]: I0225 12:05:48.298625 5005 generic.go:334] "Generic (PLEG): container finished" podID="305cdf9b-3559-4c5f-9096-816626efe32c" containerID="55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3" exitCode=0 Feb 25 12:05:48 crc kubenswrapper[5005]: I0225 12:05:48.298649 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerDied","Data":"55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3"} Feb 25 12:05:49 crc kubenswrapper[5005]: I0225 12:05:49.310877 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerStarted","Data":"28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa"} Feb 25 12:05:49 crc kubenswrapper[5005]: I0225 12:05:49.314863 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wv99" 
event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerStarted","Data":"dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c"} Feb 25 12:05:49 crc kubenswrapper[5005]: I0225 12:05:49.337702 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bwnwt" podStartSLOduration=2.906131221 podStartE2EDuration="5.33767866s" podCreationTimestamp="2026-02-25 12:05:44 +0000 UTC" firstStartedPulling="2026-02-25 12:05:46.256003737 +0000 UTC m=+2860.296736074" lastFinishedPulling="2026-02-25 12:05:48.687551186 +0000 UTC m=+2862.728283513" observedRunningTime="2026-02-25 12:05:49.337026191 +0000 UTC m=+2863.377758558" watchObservedRunningTime="2026-02-25 12:05:49.33767866 +0000 UTC m=+2863.378410987" Feb 25 12:05:49 crc kubenswrapper[5005]: I0225 12:05:49.362517 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8wv99" podStartSLOduration=1.946596467 podStartE2EDuration="4.362489614s" podCreationTimestamp="2026-02-25 12:05:45 +0000 UTC" firstStartedPulling="2026-02-25 12:05:46.258860735 +0000 UTC m=+2860.299593062" lastFinishedPulling="2026-02-25 12:05:48.674753882 +0000 UTC m=+2862.715486209" observedRunningTime="2026-02-25 12:05:49.358897304 +0000 UTC m=+2863.399629671" watchObservedRunningTime="2026-02-25 12:05:49.362489614 +0000 UTC m=+2863.403221951" Feb 25 12:05:55 crc kubenswrapper[5005]: I0225 12:05:55.442626 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:55 crc kubenswrapper[5005]: I0225 12:05:55.443254 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:55 crc kubenswrapper[5005]: I0225 12:05:55.495519 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 
12:05:55 crc kubenswrapper[5005]: I0225 12:05:55.515723 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:55 crc kubenswrapper[5005]: I0225 12:05:55.515781 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:55 crc kubenswrapper[5005]: I0225 12:05:55.566494 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:56 crc kubenswrapper[5005]: I0225 12:05:56.453304 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:56 crc kubenswrapper[5005]: I0225 12:05:56.474867 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:57 crc kubenswrapper[5005]: I0225 12:05:57.332194 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8wv99"] Feb 25 12:05:58 crc kubenswrapper[5005]: I0225 12:05:58.088265 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:05:58 crc kubenswrapper[5005]: I0225 12:05:58.088956 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:05:58 crc kubenswrapper[5005]: I0225 12:05:58.412204 5005 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/community-operators-8wv99" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="registry-server" containerID="cri-o://dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c" gracePeriod=2 Feb 25 12:05:58 crc kubenswrapper[5005]: I0225 12:05:58.734829 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bwnwt"] Feb 25 12:05:58 crc kubenswrapper[5005]: I0225 12:05:58.735313 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bwnwt" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="registry-server" containerID="cri-o://28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa" gracePeriod=2 Feb 25 12:05:58 crc kubenswrapper[5005]: I0225 12:05:58.936718 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.137400 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjr4g\" (UniqueName: \"kubernetes.io/projected/022be3c7-53ad-4b1a-9841-1195783317c9-kube-api-access-hjr4g\") pod \"022be3c7-53ad-4b1a-9841-1195783317c9\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.137514 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-utilities\") pod \"022be3c7-53ad-4b1a-9841-1195783317c9\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.137563 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-catalog-content\") pod 
\"022be3c7-53ad-4b1a-9841-1195783317c9\" (UID: \"022be3c7-53ad-4b1a-9841-1195783317c9\") " Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.138635 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-utilities" (OuterVolumeSpecName: "utilities") pod "022be3c7-53ad-4b1a-9841-1195783317c9" (UID: "022be3c7-53ad-4b1a-9841-1195783317c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.144227 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022be3c7-53ad-4b1a-9841-1195783317c9-kube-api-access-hjr4g" (OuterVolumeSpecName: "kube-api-access-hjr4g") pod "022be3c7-53ad-4b1a-9841-1195783317c9" (UID: "022be3c7-53ad-4b1a-9841-1195783317c9"). InnerVolumeSpecName "kube-api-access-hjr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.188469 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.188778 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "022be3c7-53ad-4b1a-9841-1195783317c9" (UID: "022be3c7-53ad-4b1a-9841-1195783317c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.239566 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.239614 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022be3c7-53ad-4b1a-9841-1195783317c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.239631 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjr4g\" (UniqueName: \"kubernetes.io/projected/022be3c7-53ad-4b1a-9841-1195783317c9-kube-api-access-hjr4g\") on node \"crc\" DevicePath \"\"" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.340897 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmqxc\" (UniqueName: \"kubernetes.io/projected/305cdf9b-3559-4c5f-9096-816626efe32c-kube-api-access-gmqxc\") pod \"305cdf9b-3559-4c5f-9096-816626efe32c\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.341459 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-catalog-content\") pod \"305cdf9b-3559-4c5f-9096-816626efe32c\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.341495 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-utilities\") pod \"305cdf9b-3559-4c5f-9096-816626efe32c\" (UID: \"305cdf9b-3559-4c5f-9096-816626efe32c\") " Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.342571 
5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-utilities" (OuterVolumeSpecName: "utilities") pod "305cdf9b-3559-4c5f-9096-816626efe32c" (UID: "305cdf9b-3559-4c5f-9096-816626efe32c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.344154 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305cdf9b-3559-4c5f-9096-816626efe32c-kube-api-access-gmqxc" (OuterVolumeSpecName: "kube-api-access-gmqxc") pod "305cdf9b-3559-4c5f-9096-816626efe32c" (UID: "305cdf9b-3559-4c5f-9096-816626efe32c"). InnerVolumeSpecName "kube-api-access-gmqxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.406206 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "305cdf9b-3559-4c5f-9096-816626efe32c" (UID: "305cdf9b-3559-4c5f-9096-816626efe32c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.431536 5005 generic.go:334] "Generic (PLEG): container finished" podID="022be3c7-53ad-4b1a-9841-1195783317c9" containerID="dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c" exitCode=0 Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.431716 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wv99" event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerDied","Data":"dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c"} Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.431784 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wv99" event={"ID":"022be3c7-53ad-4b1a-9841-1195783317c9","Type":"ContainerDied","Data":"db3d9bf9230055a57a330e92eb40138fc9037775603f196ff2826147a3cbc41c"} Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.431807 5005 scope.go:117] "RemoveContainer" containerID="dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.431980 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8wv99" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.436413 5005 generic.go:334] "Generic (PLEG): container finished" podID="305cdf9b-3559-4c5f-9096-816626efe32c" containerID="28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa" exitCode=0 Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.436519 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerDied","Data":"28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa"} Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.436606 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwnwt" event={"ID":"305cdf9b-3559-4c5f-9096-816626efe32c","Type":"ContainerDied","Data":"4fa8430ff9868c2bdbc480f70003151290a6800800793672a080c52dadec9a04"} Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.436717 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bwnwt" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.442907 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.442935 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305cdf9b-3559-4c5f-9096-816626efe32c-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.442944 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmqxc\" (UniqueName: \"kubernetes.io/projected/305cdf9b-3559-4c5f-9096-816626efe32c-kube-api-access-gmqxc\") on node \"crc\" DevicePath \"\"" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.460941 5005 scope.go:117] "RemoveContainer" containerID="de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.487140 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bwnwt"] Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.497079 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bwnwt"] Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.501300 5005 scope.go:117] "RemoveContainer" containerID="d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.504797 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8wv99"] Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.513528 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8wv99"] Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 
12:05:59.524840 5005 scope.go:117] "RemoveContainer" containerID="dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c" Feb 25 12:05:59 crc kubenswrapper[5005]: E0225 12:05:59.525328 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c\": container with ID starting with dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c not found: ID does not exist" containerID="dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.525394 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c"} err="failed to get container status \"dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c\": rpc error: code = NotFound desc = could not find container \"dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c\": container with ID starting with dcda39935bcdf52f4ef8989d7edbd09159e0788119a7520b125578c2b82ebb1c not found: ID does not exist" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.525428 5005 scope.go:117] "RemoveContainer" containerID="de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d" Feb 25 12:05:59 crc kubenswrapper[5005]: E0225 12:05:59.525757 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d\": container with ID starting with de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d not found: ID does not exist" containerID="de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.525787 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d"} err="failed to get container status \"de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d\": rpc error: code = NotFound desc = could not find container \"de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d\": container with ID starting with de96e196ea91d62d1cb4bdb03c8145bc4a49c0a85eaf437da8348285a730cb7d not found: ID does not exist" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.525810 5005 scope.go:117] "RemoveContainer" containerID="d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6" Feb 25 12:05:59 crc kubenswrapper[5005]: E0225 12:05:59.526071 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6\": container with ID starting with d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6 not found: ID does not exist" containerID="d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.526107 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6"} err="failed to get container status \"d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6\": rpc error: code = NotFound desc = could not find container \"d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6\": container with ID starting with d612bb594fbe84913c352589cb841cd3f68517cc252c8dbe69d016043e4468f6 not found: ID does not exist" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.526121 5005 scope.go:117] "RemoveContainer" containerID="28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.544737 5005 scope.go:117] "RemoveContainer" 
containerID="55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.565175 5005 scope.go:117] "RemoveContainer" containerID="0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.588501 5005 scope.go:117] "RemoveContainer" containerID="28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa" Feb 25 12:05:59 crc kubenswrapper[5005]: E0225 12:05:59.589128 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa\": container with ID starting with 28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa not found: ID does not exist" containerID="28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.589171 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa"} err="failed to get container status \"28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa\": rpc error: code = NotFound desc = could not find container \"28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa\": container with ID starting with 28ebc46f99d6fc0cf7fb2f534d0bdaea2512fd1e2a9ad6f43d3376b06e3c96fa not found: ID does not exist" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.589192 5005 scope.go:117] "RemoveContainer" containerID="55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3" Feb 25 12:05:59 crc kubenswrapper[5005]: E0225 12:05:59.589621 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3\": container with ID starting with 
55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3 not found: ID does not exist" containerID="55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.589656 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3"} err="failed to get container status \"55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3\": rpc error: code = NotFound desc = could not find container \"55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3\": container with ID starting with 55421a3ed466d48e700ef543a5f4d67c0d245b39e828b75e08e6716fb2d846d3 not found: ID does not exist" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.589670 5005 scope.go:117] "RemoveContainer" containerID="0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9" Feb 25 12:05:59 crc kubenswrapper[5005]: E0225 12:05:59.590028 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9\": container with ID starting with 0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9 not found: ID does not exist" containerID="0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9" Feb 25 12:05:59 crc kubenswrapper[5005]: I0225 12:05:59.590046 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9"} err="failed to get container status \"0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9\": rpc error: code = NotFound desc = could not find container \"0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9\": container with ID starting with 0f11e349287e75d2bd57268c48c01de6b58d1ee35ed1cbd45626b3d0b93120f9 not found: ID does not 
exist" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.169359 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533686-hqqh5"] Feb 25 12:06:00 crc kubenswrapper[5005]: E0225 12:06:00.170019 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="registry-server" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170050 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="registry-server" Feb 25 12:06:00 crc kubenswrapper[5005]: E0225 12:06:00.170077 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="extract-content" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170090 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="extract-content" Feb 25 12:06:00 crc kubenswrapper[5005]: E0225 12:06:00.170121 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="extract-utilities" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170138 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="extract-utilities" Feb 25 12:06:00 crc kubenswrapper[5005]: E0225 12:06:00.170190 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="extract-utilities" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170207 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="extract-utilities" Feb 25 12:06:00 crc kubenswrapper[5005]: E0225 12:06:00.170238 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="registry-server" Feb 25 12:06:00 crc 
kubenswrapper[5005]: I0225 12:06:00.170254 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="registry-server" Feb 25 12:06:00 crc kubenswrapper[5005]: E0225 12:06:00.170273 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="extract-content" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170288 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="extract-content" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170837 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" containerName="registry-server" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.170861 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" containerName="registry-server" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.172100 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.175672 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.176100 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.181412 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.187727 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533686-hqqh5"] Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.361485 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bff7\" (UniqueName: \"kubernetes.io/projected/7ba2309e-c965-431a-8da7-85e347d172d6-kube-api-access-8bff7\") pod \"auto-csr-approver-29533686-hqqh5\" (UID: \"7ba2309e-c965-431a-8da7-85e347d172d6\") " pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.468543 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bff7\" (UniqueName: \"kubernetes.io/projected/7ba2309e-c965-431a-8da7-85e347d172d6-kube-api-access-8bff7\") pod \"auto-csr-approver-29533686-hqqh5\" (UID: \"7ba2309e-c965-431a-8da7-85e347d172d6\") " pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.495348 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bff7\" (UniqueName: \"kubernetes.io/projected/7ba2309e-c965-431a-8da7-85e347d172d6-kube-api-access-8bff7\") pod \"auto-csr-approver-29533686-hqqh5\" (UID: \"7ba2309e-c965-431a-8da7-85e347d172d6\") " 
pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.502829 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.697750 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022be3c7-53ad-4b1a-9841-1195783317c9" path="/var/lib/kubelet/pods/022be3c7-53ad-4b1a-9841-1195783317c9/volumes" Feb 25 12:06:00 crc kubenswrapper[5005]: I0225 12:06:00.699011 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305cdf9b-3559-4c5f-9096-816626efe32c" path="/var/lib/kubelet/pods/305cdf9b-3559-4c5f-9096-816626efe32c/volumes" Feb 25 12:06:01 crc kubenswrapper[5005]: W0225 12:06:01.008576 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba2309e_c965_431a_8da7_85e347d172d6.slice/crio-ce82af3c4887e125927b6be0bd630e7e6375a5686757f31fad95d6f406fbd5a6 WatchSource:0}: Error finding container ce82af3c4887e125927b6be0bd630e7e6375a5686757f31fad95d6f406fbd5a6: Status 404 returned error can't find the container with id ce82af3c4887e125927b6be0bd630e7e6375a5686757f31fad95d6f406fbd5a6 Feb 25 12:06:01 crc kubenswrapper[5005]: I0225 12:06:01.027237 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533686-hqqh5"] Feb 25 12:06:01 crc kubenswrapper[5005]: I0225 12:06:01.458926 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" event={"ID":"7ba2309e-c965-431a-8da7-85e347d172d6","Type":"ContainerStarted","Data":"ce82af3c4887e125927b6be0bd630e7e6375a5686757f31fad95d6f406fbd5a6"} Feb 25 12:06:03 crc kubenswrapper[5005]: I0225 12:06:03.482076 5005 generic.go:334] "Generic (PLEG): container finished" podID="7ba2309e-c965-431a-8da7-85e347d172d6" 
containerID="0ae4c861233e6a4a070c39a52a95481a5ee30f23886940793fd2b383ec40db2c" exitCode=0 Feb 25 12:06:03 crc kubenswrapper[5005]: I0225 12:06:03.482550 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" event={"ID":"7ba2309e-c965-431a-8da7-85e347d172d6","Type":"ContainerDied","Data":"0ae4c861233e6a4a070c39a52a95481a5ee30f23886940793fd2b383ec40db2c"} Feb 25 12:06:04 crc kubenswrapper[5005]: I0225 12:06:04.878735 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:04 crc kubenswrapper[5005]: I0225 12:06:04.959041 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bff7\" (UniqueName: \"kubernetes.io/projected/7ba2309e-c965-431a-8da7-85e347d172d6-kube-api-access-8bff7\") pod \"7ba2309e-c965-431a-8da7-85e347d172d6\" (UID: \"7ba2309e-c965-431a-8da7-85e347d172d6\") " Feb 25 12:06:04 crc kubenswrapper[5005]: I0225 12:06:04.964601 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba2309e-c965-431a-8da7-85e347d172d6-kube-api-access-8bff7" (OuterVolumeSpecName: "kube-api-access-8bff7") pod "7ba2309e-c965-431a-8da7-85e347d172d6" (UID: "7ba2309e-c965-431a-8da7-85e347d172d6"). InnerVolumeSpecName "kube-api-access-8bff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:06:05 crc kubenswrapper[5005]: I0225 12:06:05.061444 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bff7\" (UniqueName: \"kubernetes.io/projected/7ba2309e-c965-431a-8da7-85e347d172d6-kube-api-access-8bff7\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:05 crc kubenswrapper[5005]: I0225 12:06:05.499626 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" Feb 25 12:06:05 crc kubenswrapper[5005]: I0225 12:06:05.500290 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533686-hqqh5" event={"ID":"7ba2309e-c965-431a-8da7-85e347d172d6","Type":"ContainerDied","Data":"ce82af3c4887e125927b6be0bd630e7e6375a5686757f31fad95d6f406fbd5a6"} Feb 25 12:06:05 crc kubenswrapper[5005]: I0225 12:06:05.500323 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce82af3c4887e125927b6be0bd630e7e6375a5686757f31fad95d6f406fbd5a6" Feb 25 12:06:05 crc kubenswrapper[5005]: I0225 12:06:05.952339 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533680-rtq5g"] Feb 25 12:06:05 crc kubenswrapper[5005]: I0225 12:06:05.960819 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533680-rtq5g"] Feb 25 12:06:06 crc kubenswrapper[5005]: I0225 12:06:06.705230 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddfe17f5-d025-4272-93dd-d71fda7be5e9" path="/var/lib/kubelet/pods/ddfe17f5-d025-4272-93dd-d71fda7be5e9/volumes" Feb 25 12:06:19 crc kubenswrapper[5005]: I0225 12:06:19.545039 5005 scope.go:117] "RemoveContainer" containerID="1d3e53e488a48f466e5d3765c41ca6d1c07b3b75b0c6f1d3d736957f5b31849a" Feb 25 12:06:28 crc kubenswrapper[5005]: I0225 12:06:28.087179 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:06:28 crc kubenswrapper[5005]: I0225 12:06:28.087676 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:06:51 crc kubenswrapper[5005]: I0225 12:06:51.940661 5005 generic.go:334] "Generic (PLEG): container finished" podID="13abaed5-d544-41f0-8bd9-07bbd0798c33" containerID="1b824557f04005944c13ce102a955c3f5f855102ee9f2300ea9ea5b0970bf495" exitCode=0 Feb 25 12:06:51 crc kubenswrapper[5005]: I0225 12:06:51.940785 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" event={"ID":"13abaed5-d544-41f0-8bd9-07bbd0798c33","Type":"ContainerDied","Data":"1b824557f04005944c13ce102a955c3f5f855102ee9f2300ea9ea5b0970bf495"} Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.393088 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575250 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-extra-config-0\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575337 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-3\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575365 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" 
(UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575428 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-0\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575516 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-inventory\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575548 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-1\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575684 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph-nova-0\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575702 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ssh-key-openstack-edpm-ipam\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575717 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-2\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575746 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-0\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575761 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-custom-ceph-combined-ca-bundle\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575790 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-1\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.575822 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx67f\" (UniqueName: \"kubernetes.io/projected/13abaed5-d544-41f0-8bd9-07bbd0798c33-kube-api-access-fx67f\") pod \"13abaed5-d544-41f0-8bd9-07bbd0798c33\" (UID: \"13abaed5-d544-41f0-8bd9-07bbd0798c33\") " Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.582008 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-custom-ceph-combined-ca-bundle" 
(OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.584710 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13abaed5-d544-41f0-8bd9-07bbd0798c33-kube-api-access-fx67f" (OuterVolumeSpecName: "kube-api-access-fx67f") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "kube-api-access-fx67f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.602272 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.607985 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.609626 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). 
InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.612363 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.612561 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph" (OuterVolumeSpecName: "ceph") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.614676 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.614716 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.615288 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.620242 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-inventory" (OuterVolumeSpecName: "inventory") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.623029 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.629599 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13abaed5-d544-41f0-8bd9-07bbd0798c33" (UID: "13abaed5-d544-41f0-8bd9-07bbd0798c33"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678158 5005 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678385 5005 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678447 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678512 5005 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678565 5005 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-inventory\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678633 5005 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678700 5005 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/13abaed5-d544-41f0-8bd9-07bbd0798c33-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc 
kubenswrapper[5005]: I0225 12:06:53.678775 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678844 5005 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678936 5005 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.678991 5005 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.679061 5005 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13abaed5-d544-41f0-8bd9-07bbd0798c33-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.679121 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx67f\" (UniqueName: \"kubernetes.io/projected/13abaed5-d544-41f0-8bd9-07bbd0798c33-kube-api-access-fx67f\") on node \"crc\" DevicePath \"\"" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.962333 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" 
event={"ID":"13abaed5-d544-41f0-8bd9-07bbd0798c33","Type":"ContainerDied","Data":"31a3c4e25ad955cde5cc1a25abd9bfac9b004ac8f5619e3dfe75c21e1dd54ccb"} Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.962445 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a3c4e25ad955cde5cc1a25abd9bfac9b004ac8f5619e3dfe75c21e1dd54ccb" Feb 25 12:06:53 crc kubenswrapper[5005]: I0225 12:06:53.962403 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl" Feb 25 12:06:58 crc kubenswrapper[5005]: I0225 12:06:58.087982 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:06:58 crc kubenswrapper[5005]: I0225 12:06:58.088641 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:06:58 crc kubenswrapper[5005]: I0225 12:06:58.088692 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:06:58 crc kubenswrapper[5005]: I0225 12:06:58.089568 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:06:58 crc 
kubenswrapper[5005]: I0225 12:06:58.089636 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" gracePeriod=600 Feb 25 12:06:58 crc kubenswrapper[5005]: E0225 12:06:58.208516 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:06:59 crc kubenswrapper[5005]: I0225 12:06:59.009127 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" exitCode=0 Feb 25 12:06:59 crc kubenswrapper[5005]: I0225 12:06:59.009183 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52"} Feb 25 12:06:59 crc kubenswrapper[5005]: I0225 12:06:59.009471 5005 scope.go:117] "RemoveContainer" containerID="4d68713e6e19c3efba3ef33278796317e807a4a6ee0ef751e7b26fc433e930e4" Feb 25 12:06:59 crc kubenswrapper[5005]: I0225 12:06:59.010050 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:06:59 crc kubenswrapper[5005]: E0225 12:06:59.010276 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.953108 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 25 12:07:07 crc kubenswrapper[5005]: E0225 12:07:07.954187 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13abaed5-d544-41f0-8bd9-07bbd0798c33" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.954208 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="13abaed5-d544-41f0-8bd9-07bbd0798c33" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 25 12:07:07 crc kubenswrapper[5005]: E0225 12:07:07.954249 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba2309e-c965-431a-8da7-85e347d172d6" containerName="oc" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.954257 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba2309e-c965-431a-8da7-85e347d172d6" containerName="oc" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.954506 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba2309e-c965-431a-8da7-85e347d172d6" containerName="oc" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.954526 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="13abaed5-d544-41f0-8bd9-07bbd0798c33" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.958410 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.967334 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.968383 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.982261 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.983926 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.989298 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 25 12:07:07 crc kubenswrapper[5005]: I0225 12:07:07.992630 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.028214 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.057824 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-dev\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.057877 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 
12:07:08.057897 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.057925 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.057946 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.057964 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/da2a296c-43d5-4d35-9929-d13584a2d821-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.057994 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058014 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-dev\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058040 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058054 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058076 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflfr\" (UniqueName: \"kubernetes.io/projected/da2a296c-43d5-4d35-9929-d13584a2d821-kube-api-access-gflfr\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058096 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-run\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058113 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-sys\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058134 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-scripts\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058160 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffn2j\" (UniqueName: \"kubernetes.io/projected/b64ce473-ca21-4e77-afd9-24d93e79a71f-kube-api-access-ffn2j\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058189 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058213 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058234 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058250 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058275 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058293 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-run\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058311 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-config-data\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058329 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-lib-modules\") pod 
\"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058350 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058389 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058406 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b64ce473-ca21-4e77-afd9-24d93e79a71f-ceph\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058428 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058448 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " 
pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058464 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058478 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058493 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.058512 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-sys\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159203 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-run\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159239 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-sys\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159265 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-scripts\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159287 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffn2j\" (UniqueName: \"kubernetes.io/projected/b64ce473-ca21-4e77-afd9-24d93e79a71f-kube-api-access-ffn2j\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159308 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159333 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159350 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159366 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159423 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159450 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-run\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159463 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-config-data\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159478 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-lib-modules\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 
25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159500 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159522 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159539 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b64ce473-ca21-4e77-afd9-24d93e79a71f-ceph\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159557 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159574 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159590 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159605 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159618 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159634 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-sys\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159664 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-dev\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159679 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 
12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159693 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159714 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159729 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159743 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/da2a296c-43d5-4d35-9929-d13584a2d821-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159769 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159787 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-dev\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159809 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159827 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.159849 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gflfr\" (UniqueName: \"kubernetes.io/projected/da2a296c-43d5-4d35-9929-d13584a2d821-kube-api-access-gflfr\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.160140 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-run\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.160170 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-sys\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 
25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161071 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161239 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161255 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161467 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-sys\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161563 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161671 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-dev\") pod 
\"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161743 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161785 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-lib-modules\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161815 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.161856 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.162006 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.162098 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.162140 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.166452 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.166932 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/da2a296c-43d5-4d35-9929-d13584a2d821-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.167075 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.168463 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-run\") pod 
\"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.168472 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.168522 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/da2a296c-43d5-4d35-9929-d13584a2d821-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.168530 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-dev\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.168560 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b64ce473-ca21-4e77-afd9-24d93e79a71f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.169810 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.170829 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.172744 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.173037 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da2a296c-43d5-4d35-9929-d13584a2d821-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.176617 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-config-data\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.176770 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b64ce473-ca21-4e77-afd9-24d93e79a71f-ceph\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.177269 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b64ce473-ca21-4e77-afd9-24d93e79a71f-scripts\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 
25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.179946 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gflfr\" (UniqueName: \"kubernetes.io/projected/da2a296c-43d5-4d35-9929-d13584a2d821-kube-api-access-gflfr\") pod \"cinder-volume-volume1-0\" (UID: \"da2a296c-43d5-4d35-9929-d13584a2d821\") " pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.182282 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffn2j\" (UniqueName: \"kubernetes.io/projected/b64ce473-ca21-4e77-afd9-24d93e79a71f-kube-api-access-ffn2j\") pod \"cinder-backup-0\" (UID: \"b64ce473-ca21-4e77-afd9-24d93e79a71f\") " pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.276457 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.303821 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.609809 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-bfmds"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.611269 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.632356 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-bfmds"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.690112 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911d1b58-a288-457b-9acb-206003bb9c0b-operator-scripts\") pod \"manila-db-create-bfmds\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.690498 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2lqx\" (UniqueName: \"kubernetes.io/projected/911d1b58-a288-457b-9acb-206003bb9c0b-kube-api-access-x2lqx\") pod \"manila-db-create-bfmds\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.729749 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-6303-account-create-update-wkvmp"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.730889 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.733123 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.741853 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-6303-account-create-update-wkvmp"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.792466 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911d1b58-a288-457b-9acb-206003bb9c0b-operator-scripts\") pod \"manila-db-create-bfmds\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.792619 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2lqx\" (UniqueName: \"kubernetes.io/projected/911d1b58-a288-457b-9acb-206003bb9c0b-kube-api-access-x2lqx\") pod \"manila-db-create-bfmds\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.792755 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d08653-4d72-4043-8f2d-aafdd7f7c384-operator-scripts\") pod \"manila-6303-account-create-update-wkvmp\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.793721 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8sv\" (UniqueName: \"kubernetes.io/projected/04d08653-4d72-4043-8f2d-aafdd7f7c384-kube-api-access-tf8sv\") pod \"manila-6303-account-create-update-wkvmp\" (UID: 
\"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.793397 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911d1b58-a288-457b-9acb-206003bb9c0b-operator-scripts\") pod \"manila-db-create-bfmds\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.800424 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.802226 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.804694 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.804898 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-khhns" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.807014 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.807098 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.815296 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.819655 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2lqx\" (UniqueName: \"kubernetes.io/projected/911d1b58-a288-457b-9acb-206003bb9c0b-kube-api-access-x2lqx\") pod \"manila-db-create-bfmds\" (UID: 
\"911d1b58-a288-457b-9acb-206003bb9c0b\") " pod="openstack/manila-db-create-bfmds" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.874727 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.876175 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.882954 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.883164 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.891259 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.899829 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.899871 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.899899 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.899924 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d08653-4d72-4043-8f2d-aafdd7f7c384-operator-scripts\") pod \"manila-6303-account-create-update-wkvmp\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.899943 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.899969 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4c2\" (UniqueName: \"kubernetes.io/projected/e67820f3-0872-48ef-b531-e263d5be19bf-kube-api-access-sl4c2\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.900052 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e67820f3-0872-48ef-b531-e263d5be19bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.900067 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.900094 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e67820f3-0872-48ef-b531-e263d5be19bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.900138 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67820f3-0872-48ef-b531-e263d5be19bf-logs\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.900159 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8sv\" (UniqueName: \"kubernetes.io/projected/04d08653-4d72-4043-8f2d-aafdd7f7c384-kube-api-access-tf8sv\") pod \"manila-6303-account-create-update-wkvmp\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.901743 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d08653-4d72-4043-8f2d-aafdd7f7c384-operator-scripts\") pod \"manila-6303-account-create-update-wkvmp\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.925499 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8sv\" 
(UniqueName: \"kubernetes.io/projected/04d08653-4d72-4043-8f2d-aafdd7f7c384-kube-api-access-tf8sv\") pod \"manila-6303-account-create-update-wkvmp\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:08 crc kubenswrapper[5005]: I0225 12:07:08.951240 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bfmds" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.001954 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.002010 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.002052 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.002106 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc 
kubenswrapper[5005]: I0225 12:07:09.002530 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.005887 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.006783 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.006990 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.007668 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.010098 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4c2\" (UniqueName: \"kubernetes.io/projected/e67820f3-0872-48ef-b531-e263d5be19bf-kube-api-access-sl4c2\") pod \"glance-default-external-api-0\" (UID: 
\"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.013691 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e67820f3-0872-48ef-b531-e263d5be19bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.013733 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.013775 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e67820f3-0872-48ef-b531-e263d5be19bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.013871 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67820f3-0872-48ef-b531-e263d5be19bf-logs\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.014308 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e67820f3-0872-48ef-b531-e263d5be19bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc 
kubenswrapper[5005]: I0225 12:07:09.014367 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e67820f3-0872-48ef-b531-e263d5be19bf-logs\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.021454 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67820f3-0872-48ef-b531-e263d5be19bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.022422 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e67820f3-0872-48ef-b531-e263d5be19bf-ceph\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.033826 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl4c2\" (UniqueName: \"kubernetes.io/projected/e67820f3-0872-48ef-b531-e263d5be19bf-kube-api-access-sl4c2\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.057005 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.058838 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e67820f3-0872-48ef-b531-e263d5be19bf\") " pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.114757 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.115651 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.115708 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5484c86b-7109-4708-b68d-a8ba3a06925b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.115726 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.115888 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5484c86b-7109-4708-b68d-a8ba3a06925b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.115941 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5484c86b-7109-4708-b68d-a8ba3a06925b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.115985 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.116032 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5497\" (UniqueName: \"kubernetes.io/projected/5484c86b-7109-4708-b68d-a8ba3a06925b-kube-api-access-g5497\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.116056 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.116088 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.128297 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"da2a296c-43d5-4d35-9929-d13584a2d821","Type":"ContainerStarted","Data":"844939fadb45778a4a0ee02d699cb8fbd852ec41d40d0e98bc14dcd78e70cc4a"} Feb 25 12:07:09 crc kubenswrapper[5005]: W0225 12:07:09.152237 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb64ce473_ca21_4e77_afd9_24d93e79a71f.slice/crio-7f0856656cf12ca30fa179be406dd7aa65722422b98c80d79bd5a46091213621 WatchSource:0}: Error finding container 7f0856656cf12ca30fa179be406dd7aa65722422b98c80d79bd5a46091213621: Status 404 returned error can't find the container with id 7f0856656cf12ca30fa179be406dd7aa65722422b98c80d79bd5a46091213621 Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.208709 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217228 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5497\" (UniqueName: \"kubernetes.io/projected/5484c86b-7109-4708-b68d-a8ba3a06925b-kube-api-access-g5497\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217272 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217307 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217352 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217397 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5484c86b-7109-4708-b68d-a8ba3a06925b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " 
pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217420 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217451 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5484c86b-7109-4708-b68d-a8ba3a06925b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217494 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5484c86b-7109-4708-b68d-a8ba3a06925b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217529 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.217667 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc 
kubenswrapper[5005]: I0225 12:07:09.219242 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5484c86b-7109-4708-b68d-a8ba3a06925b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.219784 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5484c86b-7109-4708-b68d-a8ba3a06925b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.237025 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.237322 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.237442 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.238314 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/5484c86b-7109-4708-b68d-a8ba3a06925b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.244786 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5484c86b-7109-4708-b68d-a8ba3a06925b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.246415 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5497\" (UniqueName: \"kubernetes.io/projected/5484c86b-7109-4708-b68d-a8ba3a06925b-kube-api-access-g5497\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.297966 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5484c86b-7109-4708-b68d-a8ba3a06925b\") " pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.444473 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-bfmds"] Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.516162 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.572081 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-6303-account-create-update-wkvmp"] Feb 25 12:07:09 crc kubenswrapper[5005]: W0225 12:07:09.573006 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04d08653_4d72_4043_8f2d_aafdd7f7c384.slice/crio-70bc2e2418f40155e5ffde7a61b50669fc284d4c0ef0c04cc6009fc48f97142e WatchSource:0}: Error finding container 70bc2e2418f40155e5ffde7a61b50669fc284d4c0ef0c04cc6009fc48f97142e: Status 404 returned error can't find the container with id 70bc2e2418f40155e5ffde7a61b50669fc284d4c0ef0c04cc6009fc48f97142e Feb 25 12:07:09 crc kubenswrapper[5005]: I0225 12:07:09.808131 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.104793 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 25 12:07:10 crc kubenswrapper[5005]: E0225 12:07:10.109630 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod911d1b58_a288_457b_9acb_206003bb9c0b.slice/crio-conmon-fc5625cc13af47722041412700840f0f429fa480fe71d97134c77757a36af3ca.scope\": RecentStats: unable to find data in memory cache]" Feb 25 12:07:10 crc kubenswrapper[5005]: W0225 12:07:10.131217 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5484c86b_7109_4708_b68d_a8ba3a06925b.slice/crio-a4515d1e2b064a946c53f4b9db39c6bc00915b436a024a1d368fe36c1344fdb0 WatchSource:0}: Error finding container a4515d1e2b064a946c53f4b9db39c6bc00915b436a024a1d368fe36c1344fdb0: Status 404 returned error can't find the 
container with id a4515d1e2b064a946c53f4b9db39c6bc00915b436a024a1d368fe36c1344fdb0 Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.139016 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-6303-account-create-update-wkvmp" event={"ID":"04d08653-4d72-4043-8f2d-aafdd7f7c384","Type":"ContainerStarted","Data":"557a41332180d194dbca1e9e4a3e4959368221f72c243411551413407d8cd7ce"} Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.139216 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-6303-account-create-update-wkvmp" event={"ID":"04d08653-4d72-4043-8f2d-aafdd7f7c384","Type":"ContainerStarted","Data":"70bc2e2418f40155e5ffde7a61b50669fc284d4c0ef0c04cc6009fc48f97142e"} Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.155621 5005 generic.go:334] "Generic (PLEG): container finished" podID="911d1b58-a288-457b-9acb-206003bb9c0b" containerID="fc5625cc13af47722041412700840f0f429fa480fe71d97134c77757a36af3ca" exitCode=0 Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.156216 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-bfmds" event={"ID":"911d1b58-a288-457b-9acb-206003bb9c0b","Type":"ContainerDied","Data":"fc5625cc13af47722041412700840f0f429fa480fe71d97134c77757a36af3ca"} Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.156258 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-bfmds" event={"ID":"911d1b58-a288-457b-9acb-206003bb9c0b","Type":"ContainerStarted","Data":"79635d4b116ce9f10c1aaa6d2257b561ea0286e2ee787d799bb44ea11013054a"} Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.171780 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-6303-account-create-update-wkvmp" podStartSLOduration=2.171749769 podStartE2EDuration="2.171749769s" podCreationTimestamp="2026-02-25 12:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:07:10.165573548 +0000 UTC m=+2944.206305895" watchObservedRunningTime="2026-02-25 12:07:10.171749769 +0000 UTC m=+2944.212482096" Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.184424 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b64ce473-ca21-4e77-afd9-24d93e79a71f","Type":"ContainerStarted","Data":"7f0856656cf12ca30fa179be406dd7aa65722422b98c80d79bd5a46091213621"} Feb 25 12:07:10 crc kubenswrapper[5005]: I0225 12:07:10.186553 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e67820f3-0872-48ef-b531-e263d5be19bf","Type":"ContainerStarted","Data":"368fd3533bb084e1389d0b3a84dbbd39e6176d479ee935b6683f2a89f4431647"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.196550 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"da2a296c-43d5-4d35-9929-d13584a2d821","Type":"ContainerStarted","Data":"9f066d14e1af64b599ce3edee657ca1ad83f873834865deeb76b232971d58e35"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.198465 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"da2a296c-43d5-4d35-9929-d13584a2d821","Type":"ContainerStarted","Data":"ed7ad29d3106731928ac788472061ef26d4bd29ab267f5be10be5710f6d8fb14"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.201172 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5484c86b-7109-4708-b68d-a8ba3a06925b","Type":"ContainerStarted","Data":"2a6298b17820bb7cad2881309db55190f97f9c8c0003636502b2338e08aab1d6"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.201210 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5484c86b-7109-4708-b68d-a8ba3a06925b","Type":"ContainerStarted","Data":"a4515d1e2b064a946c53f4b9db39c6bc00915b436a024a1d368fe36c1344fdb0"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.205052 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b64ce473-ca21-4e77-afd9-24d93e79a71f","Type":"ContainerStarted","Data":"af4895fc8f6d85ab7a0a2eedae3fb59edc8a54f29f7baaa4751be75bd7222115"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.206790 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e67820f3-0872-48ef-b531-e263d5be19bf","Type":"ContainerStarted","Data":"771bf617f80e4a5015afda455ae17c984475c6499c58e43a96afba644bd59ad9"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.215482 5005 generic.go:334] "Generic (PLEG): container finished" podID="04d08653-4d72-4043-8f2d-aafdd7f7c384" containerID="557a41332180d194dbca1e9e4a3e4959368221f72c243411551413407d8cd7ce" exitCode=0 Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.215543 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-6303-account-create-update-wkvmp" event={"ID":"04d08653-4d72-4043-8f2d-aafdd7f7c384","Type":"ContainerDied","Data":"557a41332180d194dbca1e9e4a3e4959368221f72c243411551413407d8cd7ce"} Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.223905 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.027453826 podStartE2EDuration="4.22389304s" podCreationTimestamp="2026-02-25 12:07:07 +0000 UTC" firstStartedPulling="2026-02-25 12:07:09.017353709 +0000 UTC m=+2943.058086036" lastFinishedPulling="2026-02-25 12:07:10.213792923 +0000 UTC m=+2944.254525250" observedRunningTime="2026-02-25 12:07:11.218339039 +0000 UTC m=+2945.259071366" watchObservedRunningTime="2026-02-25 12:07:11.22389304 +0000 UTC m=+2945.264625367" Feb 25 12:07:11 crc 
kubenswrapper[5005]: I0225 12:07:11.647648 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bfmds" Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.673125 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2lqx\" (UniqueName: \"kubernetes.io/projected/911d1b58-a288-457b-9acb-206003bb9c0b-kube-api-access-x2lqx\") pod \"911d1b58-a288-457b-9acb-206003bb9c0b\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.673168 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911d1b58-a288-457b-9acb-206003bb9c0b-operator-scripts\") pod \"911d1b58-a288-457b-9acb-206003bb9c0b\" (UID: \"911d1b58-a288-457b-9acb-206003bb9c0b\") " Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.674091 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/911d1b58-a288-457b-9acb-206003bb9c0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "911d1b58-a288-457b-9acb-206003bb9c0b" (UID: "911d1b58-a288-457b-9acb-206003bb9c0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.680598 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911d1b58-a288-457b-9acb-206003bb9c0b-kube-api-access-x2lqx" (OuterVolumeSpecName: "kube-api-access-x2lqx") pod "911d1b58-a288-457b-9acb-206003bb9c0b" (UID: "911d1b58-a288-457b-9acb-206003bb9c0b"). InnerVolumeSpecName "kube-api-access-x2lqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.686513 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:07:11 crc kubenswrapper[5005]: E0225 12:07:11.686764 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.775167 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2lqx\" (UniqueName: \"kubernetes.io/projected/911d1b58-a288-457b-9acb-206003bb9c0b-kube-api-access-x2lqx\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:11 crc kubenswrapper[5005]: I0225 12:07:11.775201 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/911d1b58-a288-457b-9acb-206003bb9c0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.233102 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b64ce473-ca21-4e77-afd9-24d93e79a71f","Type":"ContainerStarted","Data":"4106cf47fbd45e1b96e491a010c5bc783912dc219ae056d6b841d81367bea285"} Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.235950 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e67820f3-0872-48ef-b531-e263d5be19bf","Type":"ContainerStarted","Data":"3ddaa16348d99228f26d67f441b752f8348dc0ee6ae6311e3f8fc4178fb27ef3"} Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.239294 5005 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5484c86b-7109-4708-b68d-a8ba3a06925b","Type":"ContainerStarted","Data":"2485d34c060ad86bfa5332764ea5c0baa056c0de762a13aae9d5b34aea00f755"} Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.242137 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-bfmds" event={"ID":"911d1b58-a288-457b-9acb-206003bb9c0b","Type":"ContainerDied","Data":"79635d4b116ce9f10c1aaa6d2257b561ea0286e2ee787d799bb44ea11013054a"} Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.242168 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79635d4b116ce9f10c1aaa6d2257b561ea0286e2ee787d799bb44ea11013054a" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.242332 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-bfmds" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.260293 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.919592552 podStartE2EDuration="5.260273077s" podCreationTimestamp="2026-02-25 12:07:07 +0000 UTC" firstStartedPulling="2026-02-25 12:07:09.15611053 +0000 UTC m=+2943.196842857" lastFinishedPulling="2026-02-25 12:07:10.496791055 +0000 UTC m=+2944.537523382" observedRunningTime="2026-02-25 12:07:12.257567854 +0000 UTC m=+2946.298300181" watchObservedRunningTime="2026-02-25 12:07:12.260273077 +0000 UTC m=+2946.301005404" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.306219 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.30619425 podStartE2EDuration="5.30619425s" podCreationTimestamp="2026-02-25 12:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:07:12.286814453 
+0000 UTC m=+2946.327546780" watchObservedRunningTime="2026-02-25 12:07:12.30619425 +0000 UTC m=+2946.346926597" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.324233 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.324211845 podStartE2EDuration="5.324211845s" podCreationTimestamp="2026-02-25 12:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:07:12.315907939 +0000 UTC m=+2946.356640266" watchObservedRunningTime="2026-02-25 12:07:12.324211845 +0000 UTC m=+2946.364944172" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.600507 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.693477 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d08653-4d72-4043-8f2d-aafdd7f7c384-operator-scripts\") pod \"04d08653-4d72-4043-8f2d-aafdd7f7c384\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.693544 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf8sv\" (UniqueName: \"kubernetes.io/projected/04d08653-4d72-4043-8f2d-aafdd7f7c384-kube-api-access-tf8sv\") pod \"04d08653-4d72-4043-8f2d-aafdd7f7c384\" (UID: \"04d08653-4d72-4043-8f2d-aafdd7f7c384\") " Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.695263 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d08653-4d72-4043-8f2d-aafdd7f7c384-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04d08653-4d72-4043-8f2d-aafdd7f7c384" (UID: "04d08653-4d72-4043-8f2d-aafdd7f7c384"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.717041 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d08653-4d72-4043-8f2d-aafdd7f7c384-kube-api-access-tf8sv" (OuterVolumeSpecName: "kube-api-access-tf8sv") pod "04d08653-4d72-4043-8f2d-aafdd7f7c384" (UID: "04d08653-4d72-4043-8f2d-aafdd7f7c384"). InnerVolumeSpecName "kube-api-access-tf8sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.795442 5005 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04d08653-4d72-4043-8f2d-aafdd7f7c384-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:12 crc kubenswrapper[5005]: I0225 12:07:12.795476 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf8sv\" (UniqueName: \"kubernetes.io/projected/04d08653-4d72-4043-8f2d-aafdd7f7c384-kube-api-access-tf8sv\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:13 crc kubenswrapper[5005]: I0225 12:07:13.272910 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-6303-account-create-update-wkvmp" event={"ID":"04d08653-4d72-4043-8f2d-aafdd7f7c384","Type":"ContainerDied","Data":"70bc2e2418f40155e5ffde7a61b50669fc284d4c0ef0c04cc6009fc48f97142e"} Feb 25 12:07:13 crc kubenswrapper[5005]: I0225 12:07:13.273221 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70bc2e2418f40155e5ffde7a61b50669fc284d4c0ef0c04cc6009fc48f97142e" Feb 25 12:07:13 crc kubenswrapper[5005]: I0225 12:07:13.273272 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-6303-account-create-update-wkvmp" Feb 25 12:07:13 crc kubenswrapper[5005]: I0225 12:07:13.277232 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:13 crc kubenswrapper[5005]: I0225 12:07:13.305365 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.054535 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-5pz6m"] Feb 25 12:07:14 crc kubenswrapper[5005]: E0225 12:07:14.054999 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d08653-4d72-4043-8f2d-aafdd7f7c384" containerName="mariadb-account-create-update" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.055018 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d08653-4d72-4043-8f2d-aafdd7f7c384" containerName="mariadb-account-create-update" Feb 25 12:07:14 crc kubenswrapper[5005]: E0225 12:07:14.055042 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911d1b58-a288-457b-9acb-206003bb9c0b" containerName="mariadb-database-create" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.055050 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="911d1b58-a288-457b-9acb-206003bb9c0b" containerName="mariadb-database-create" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.055233 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="911d1b58-a288-457b-9acb-206003bb9c0b" containerName="mariadb-database-create" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.055253 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d08653-4d72-4043-8f2d-aafdd7f7c384" containerName="mariadb-account-create-update" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.055941 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.058225 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-vtbrc" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.058600 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.073255 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-5pz6m"] Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.226146 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-job-config-data\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.226331 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdfx\" (UniqueName: \"kubernetes.io/projected/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-kube-api-access-dqdfx\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.226537 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-combined-ca-bundle\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.226774 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-config-data\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.328729 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-job-config-data\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.329099 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdfx\" (UniqueName: \"kubernetes.io/projected/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-kube-api-access-dqdfx\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.329622 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-combined-ca-bundle\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.330130 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-config-data\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.335435 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-combined-ca-bundle\") pod \"manila-db-sync-5pz6m\" (UID: 
\"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.336029 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-job-config-data\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.339230 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-config-data\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.354419 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdfx\" (UniqueName: \"kubernetes.io/projected/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-kube-api-access-dqdfx\") pod \"manila-db-sync-5pz6m\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:14 crc kubenswrapper[5005]: I0225 12:07:14.449276 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:15 crc kubenswrapper[5005]: I0225 12:07:15.356092 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-5pz6m"] Feb 25 12:07:16 crc kubenswrapper[5005]: I0225 12:07:16.305725 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-5pz6m" event={"ID":"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0","Type":"ContainerStarted","Data":"d4d22ecd0ffbc925c243a42060da0b33342de239acc21a2a750a855851bb6efc"} Feb 25 12:07:18 crc kubenswrapper[5005]: I0225 12:07:18.537677 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 25 12:07:18 crc kubenswrapper[5005]: I0225 12:07:18.550870 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.210271 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.210337 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.241811 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.259188 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.339503 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.339545 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 25 12:07:19 crc 
kubenswrapper[5005]: I0225 12:07:19.518388 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.518438 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.561307 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:19 crc kubenswrapper[5005]: I0225 12:07:19.577968 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:20 crc kubenswrapper[5005]: I0225 12:07:20.348916 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:20 crc kubenswrapper[5005]: I0225 12:07:20.349064 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:21 crc kubenswrapper[5005]: I0225 12:07:21.357680 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 12:07:21 crc kubenswrapper[5005]: I0225 12:07:21.358248 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 12:07:21 crc kubenswrapper[5005]: I0225 12:07:21.667216 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 25 12:07:21 crc kubenswrapper[5005]: I0225 12:07:21.668778 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 25 12:07:22 crc kubenswrapper[5005]: I0225 12:07:22.368649 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-5pz6m" 
event={"ID":"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0","Type":"ContainerStarted","Data":"da4df5d6487e271791dda521398e960339958d8c8d33646c7bb3d5954fed5569"} Feb 25 12:07:22 crc kubenswrapper[5005]: I0225 12:07:22.368893 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 12:07:22 crc kubenswrapper[5005]: I0225 12:07:22.369002 5005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 25 12:07:22 crc kubenswrapper[5005]: I0225 12:07:22.397219 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-5pz6m" podStartSLOduration=2.629353814 podStartE2EDuration="8.397193894s" podCreationTimestamp="2026-02-25 12:07:14 +0000 UTC" firstStartedPulling="2026-02-25 12:07:15.369600841 +0000 UTC m=+2949.410333168" lastFinishedPulling="2026-02-25 12:07:21.137440921 +0000 UTC m=+2955.178173248" observedRunningTime="2026-02-25 12:07:22.390118756 +0000 UTC m=+2956.430851083" watchObservedRunningTime="2026-02-25 12:07:22.397193894 +0000 UTC m=+2956.437926221" Feb 25 12:07:22 crc kubenswrapper[5005]: I0225 12:07:22.537306 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:22 crc kubenswrapper[5005]: I0225 12:07:22.978958 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 25 12:07:26 crc kubenswrapper[5005]: I0225 12:07:26.692696 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:07:26 crc kubenswrapper[5005]: E0225 12:07:26.693666 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:07:30 crc kubenswrapper[5005]: E0225 12:07:30.649646 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e95d02c_8ff5_4c4d_a8a9_c002e69afdd0.slice/crio-da4df5d6487e271791dda521398e960339958d8c8d33646c7bb3d5954fed5569.scope\": RecentStats: unable to find data in memory cache]" Feb 25 12:07:31 crc kubenswrapper[5005]: I0225 12:07:31.452417 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-5pz6m" event={"ID":"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0","Type":"ContainerDied","Data":"da4df5d6487e271791dda521398e960339958d8c8d33646c7bb3d5954fed5569"} Feb 25 12:07:31 crc kubenswrapper[5005]: I0225 12:07:31.452427 5005 generic.go:334] "Generic (PLEG): container finished" podID="7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" containerID="da4df5d6487e271791dda521398e960339958d8c8d33646c7bb3d5954fed5569" exitCode=0 Feb 25 12:07:32 crc kubenswrapper[5005]: I0225 12:07:32.955862 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:32 crc kubenswrapper[5005]: I0225 12:07:32.991086 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-config-data\") pod \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " Feb 25 12:07:32 crc kubenswrapper[5005]: I0225 12:07:32.991161 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-job-config-data\") pod \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " Feb 25 12:07:32 crc kubenswrapper[5005]: I0225 12:07:32.991205 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-combined-ca-bundle\") pod \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " Feb 25 12:07:32 crc kubenswrapper[5005]: I0225 12:07:32.991330 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqdfx\" (UniqueName: \"kubernetes.io/projected/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-kube-api-access-dqdfx\") pod \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\" (UID: \"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0\") " Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.001198 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-kube-api-access-dqdfx" (OuterVolumeSpecName: "kube-api-access-dqdfx") pod "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" (UID: "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0"). InnerVolumeSpecName "kube-api-access-dqdfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.004757 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" (UID: "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.005757 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-config-data" (OuterVolumeSpecName: "config-data") pod "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" (UID: "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.045793 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" (UID: "7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.094114 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqdfx\" (UniqueName: \"kubernetes.io/projected/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-kube-api-access-dqdfx\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.094155 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.094197 5005 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.094206 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.487155 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-5pz6m" event={"ID":"7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0","Type":"ContainerDied","Data":"d4d22ecd0ffbc925c243a42060da0b33342de239acc21a2a750a855851bb6efc"} Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.487240 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d22ecd0ffbc925c243a42060da0b33342de239acc21a2a750a855851bb6efc" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.487192 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-5pz6m" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.943540 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:07:33 crc kubenswrapper[5005]: E0225 12:07:33.944519 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" containerName="manila-db-sync" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.944538 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" containerName="manila-db-sync" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.944712 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" containerName="manila-db-sync" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.945794 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.955120 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.955371 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.955511 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.955648 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-vtbrc" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.966248 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.969691 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.973083 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 25 12:07:33 crc kubenswrapper[5005]: I0225 12:07:33.986057 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.001457 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.015973 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016051 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81f2b95d-74fd-4386-acb0-7c2508043937-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016073 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-ceph\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016097 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wvn\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-kube-api-access-c6wvn\") pod 
\"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016134 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016149 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016166 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016184 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016276 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: 
\"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016311 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-scripts\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016355 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016380 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-scripts\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016556 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.016584 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8df\" (UniqueName: \"kubernetes.io/projected/81f2b95d-74fd-4386-acb0-7c2508043937-kube-api-access-nh8df\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " 
pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.052784 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-mb8k4"] Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.054461 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.068278 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-mb8k4"] Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.117824 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.117902 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.117927 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.117956 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: 
\"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.117981 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118008 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118038 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-scripts\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118086 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118110 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ncgf\" (UniqueName: \"kubernetes.io/projected/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-kube-api-access-5ncgf\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118135 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118154 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-scripts\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118175 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118195 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-config\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118238 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118264 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh8df\" (UniqueName: \"kubernetes.io/projected/81f2b95d-74fd-4386-acb0-7c2508043937-kube-api-access-nh8df\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118285 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118307 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118353 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81f2b95d-74fd-4386-acb0-7c2508043937-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118374 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-ceph\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.118407 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6wvn\" 
(UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-kube-api-access-c6wvn\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.122994 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.123146 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.123960 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81f2b95d-74fd-4386-acb0-7c2508043937-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.124096 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.124859 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.125027 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.134416 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.135470 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-scripts\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.136091 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.137269 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.137999 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-ceph\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.139046 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-scripts\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.145978 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6wvn\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-kube-api-access-c6wvn\") pod \"manila-share-share1-0\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.146166 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh8df\" (UniqueName: \"kubernetes.io/projected/81f2b95d-74fd-4386-acb0-7c2508043937-kube-api-access-nh8df\") pod \"manila-scheduler-0\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.219685 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.219840 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-ovsdbserver-nb\") pod 
\"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.219904 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.219929 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ncgf\" (UniqueName: \"kubernetes.io/projected/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-kube-api-access-5ncgf\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.219953 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.219972 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-config\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.220950 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-config\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.220962 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.221648 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.221738 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.221933 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.244839 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.246826 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ncgf\" (UniqueName: \"kubernetes.io/projected/d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856-kube-api-access-5ncgf\") pod 
\"dnsmasq-dns-76b5fdb995-mb8k4\" (UID: \"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856\") " pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.246858 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.254824 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.285633 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.286403 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.301882 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322567 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnmq\" (UniqueName: \"kubernetes.io/projected/c0924249-650f-4937-ba9f-4fc5619a5fd1-kube-api-access-pgnmq\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322614 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-scripts\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322694 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data-custom\") pod 
\"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322737 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322770 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322824 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0924249-650f-4937-ba9f-4fc5619a5fd1-logs\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.322871 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0924249-650f-4937-ba9f-4fc5619a5fd1-etc-machine-id\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.383966 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424703 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0924249-650f-4937-ba9f-4fc5619a5fd1-etc-machine-id\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424806 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgnmq\" (UniqueName: \"kubernetes.io/projected/c0924249-650f-4937-ba9f-4fc5619a5fd1-kube-api-access-pgnmq\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424839 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-scripts\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424889 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data-custom\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424920 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424950 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.424986 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0924249-650f-4937-ba9f-4fc5619a5fd1-logs\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.425001 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0924249-650f-4937-ba9f-4fc5619a5fd1-etc-machine-id\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.431563 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0924249-650f-4937-ba9f-4fc5619a5fd1-logs\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.434337 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.434495 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data-custom\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 
12:07:34.434607 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.436030 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-scripts\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.445211 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgnmq\" (UniqueName: \"kubernetes.io/projected/c0924249-650f-4937-ba9f-4fc5619a5fd1-kube-api-access-pgnmq\") pod \"manila-api-0\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.723162 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 25 12:07:34 crc kubenswrapper[5005]: I0225 12:07:34.969543 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.090193 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-mb8k4"] Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.165323 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.387659 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.537770 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c0924249-650f-4937-ba9f-4fc5619a5fd1","Type":"ContainerStarted","Data":"a14f57eb3c5984f1a033854c2b500ac46d4cbd46e207dfcc28793275ba0a0116"} Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.542866 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"66e4f614-7882-4113-9196-36ad55e76edb","Type":"ContainerStarted","Data":"eabbd3f8973d05e04b49763aec71a3d84a71cd4324b155268d431ca0fa1535d5"} Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.546676 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"81f2b95d-74fd-4386-acb0-7c2508043937","Type":"ContainerStarted","Data":"ee2dad58d43eeaed5f090e9426e7ae706225152069ea5332efeedf245d714a9a"} Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.556133 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" event={"ID":"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856","Type":"ContainerStarted","Data":"ccc762817d4478ac9582b984fe77284db654e5d8be8595842a6d0842c0a2efeb"} Feb 25 12:07:35 crc kubenswrapper[5005]: I0225 12:07:35.556188 5005 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" event={"ID":"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856","Type":"ContainerStarted","Data":"f6f8d285cb1495c4ab71dee5dc3851969c51846728f32edf5575c197a6f94fad"} Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.608689 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"81f2b95d-74fd-4386-acb0-7c2508043937","Type":"ContainerStarted","Data":"67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b"} Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.620021 5005 generic.go:334] "Generic (PLEG): container finished" podID="d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856" containerID="ccc762817d4478ac9582b984fe77284db654e5d8be8595842a6d0842c0a2efeb" exitCode=0 Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.620132 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" event={"ID":"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856","Type":"ContainerDied","Data":"ccc762817d4478ac9582b984fe77284db654e5d8be8595842a6d0842c0a2efeb"} Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.620173 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" event={"ID":"d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856","Type":"ContainerStarted","Data":"7fe8069eae95e5fbd327437d9ebba41dead5acd8df881c2db5a3d900419d0693"} Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.621827 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.672105 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" podStartSLOduration=3.6720808529999998 podStartE2EDuration="3.672080853s" podCreationTimestamp="2026-02-25 12:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-25 12:07:36.670094491 +0000 UTC m=+2970.710826818" watchObservedRunningTime="2026-02-25 12:07:36.672080853 +0000 UTC m=+2970.712813180" Feb 25 12:07:36 crc kubenswrapper[5005]: I0225 12:07:36.730523 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c0924249-650f-4937-ba9f-4fc5619a5fd1","Type":"ContainerStarted","Data":"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451"} Feb 25 12:07:37 crc kubenswrapper[5005]: I0225 12:07:37.332610 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:37 crc kubenswrapper[5005]: I0225 12:07:37.703785 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"81f2b95d-74fd-4386-acb0-7c2508043937","Type":"ContainerStarted","Data":"2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd"} Feb 25 12:07:37 crc kubenswrapper[5005]: I0225 12:07:37.710777 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c0924249-650f-4937-ba9f-4fc5619a5fd1","Type":"ContainerStarted","Data":"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071"} Feb 25 12:07:37 crc kubenswrapper[5005]: I0225 12:07:37.712141 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 25 12:07:37 crc kubenswrapper[5005]: I0225 12:07:37.729426 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.05011369 podStartE2EDuration="4.729407764s" podCreationTimestamp="2026-02-25 12:07:33 +0000 UTC" firstStartedPulling="2026-02-25 12:07:34.976701068 +0000 UTC m=+2969.017433395" lastFinishedPulling="2026-02-25 12:07:35.655995142 +0000 UTC m=+2969.696727469" observedRunningTime="2026-02-25 12:07:37.726463323 +0000 UTC m=+2971.767195670" watchObservedRunningTime="2026-02-25 12:07:37.729407764 +0000 UTC m=+2971.770140091" Feb 25 
12:07:37 crc kubenswrapper[5005]: I0225 12:07:37.794245 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.7942255400000002 podStartE2EDuration="3.79422554s" podCreationTimestamp="2026-02-25 12:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:07:37.786615276 +0000 UTC m=+2971.827347603" watchObservedRunningTime="2026-02-25 12:07:37.79422554 +0000 UTC m=+2971.834957867" Feb 25 12:07:38 crc kubenswrapper[5005]: I0225 12:07:38.732517 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api-log" containerID="cri-o://81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451" gracePeriod=30 Feb 25 12:07:38 crc kubenswrapper[5005]: I0225 12:07:38.732603 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api" containerID="cri-o://db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071" gracePeriod=30 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.148913 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.149193 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-central-agent" containerID="cri-o://f88be5bedbbc34161d653695096fb400839e346b8c89d725084aba39145623e5" gracePeriod=30 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.149324 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="proxy-httpd" 
containerID="cri-o://0aa9e7276d63c74f973f1a9ab82927df2838e0efe2e4543358fee0062c6f9a89" gracePeriod=30 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.149358 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="sg-core" containerID="cri-o://307be29c718a83803567db76497968305a12ac38154316e21bffaeb1781b7ed5" gracePeriod=30 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.149451 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-notification-agent" containerID="cri-o://fbd62676a0db118d9d36752afba8e28086d289817b8972fbe610e2f28bd4f8ee" gracePeriod=30 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.505755 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.581770 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgnmq\" (UniqueName: \"kubernetes.io/projected/c0924249-650f-4937-ba9f-4fc5619a5fd1-kube-api-access-pgnmq\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.581866 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.581913 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data-custom\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" 
(UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.581965 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-combined-ca-bundle\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.582071 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-scripts\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.582109 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0924249-650f-4937-ba9f-4fc5619a5fd1-etc-machine-id\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.582237 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0924249-650f-4937-ba9f-4fc5619a5fd1-logs\") pod \"c0924249-650f-4937-ba9f-4fc5619a5fd1\" (UID: \"c0924249-650f-4937-ba9f-4fc5619a5fd1\") " Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.586089 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0924249-650f-4937-ba9f-4fc5619a5fd1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.588624 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0924249-650f-4937-ba9f-4fc5619a5fd1-logs" (OuterVolumeSpecName: "logs") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.594695 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0924249-650f-4937-ba9f-4fc5619a5fd1-kube-api-access-pgnmq" (OuterVolumeSpecName: "kube-api-access-pgnmq") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "kube-api-access-pgnmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.595601 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-scripts" (OuterVolumeSpecName: "scripts") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.605540 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.654041 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.661111 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data" (OuterVolumeSpecName: "config-data") pod "c0924249-650f-4937-ba9f-4fc5619a5fd1" (UID: "c0924249-650f-4937-ba9f-4fc5619a5fd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.685280 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:07:39 crc kubenswrapper[5005]: E0225 12:07:39.685710 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686708 5005 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0924249-650f-4937-ba9f-4fc5619a5fd1-logs\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686742 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgnmq\" 
(UniqueName: \"kubernetes.io/projected/c0924249-650f-4937-ba9f-4fc5619a5fd1-kube-api-access-pgnmq\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686755 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686764 5005 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686773 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686781 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0924249-650f-4937-ba9f-4fc5619a5fd1-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.686793 5005 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c0924249-650f-4937-ba9f-4fc5619a5fd1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.744539 5005 generic.go:334] "Generic (PLEG): container finished" podID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerID="db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071" exitCode=0 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.744570 5005 generic.go:334] "Generic (PLEG): container finished" podID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerID="81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451" exitCode=143 Feb 25 12:07:39 crc 
kubenswrapper[5005]: I0225 12:07:39.744617 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c0924249-650f-4937-ba9f-4fc5619a5fd1","Type":"ContainerDied","Data":"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071"} Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.744644 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c0924249-650f-4937-ba9f-4fc5619a5fd1","Type":"ContainerDied","Data":"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451"} Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.744654 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c0924249-650f-4937-ba9f-4fc5619a5fd1","Type":"ContainerDied","Data":"a14f57eb3c5984f1a033854c2b500ac46d4cbd46e207dfcc28793275ba0a0116"} Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.744658 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.744668 5005 scope.go:117] "RemoveContainer" containerID="db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.750088 5005 generic.go:334] "Generic (PLEG): container finished" podID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerID="0aa9e7276d63c74f973f1a9ab82927df2838e0efe2e4543358fee0062c6f9a89" exitCode=0 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.750141 5005 generic.go:334] "Generic (PLEG): container finished" podID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerID="307be29c718a83803567db76497968305a12ac38154316e21bffaeb1781b7ed5" exitCode=2 Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.750151 5005 generic.go:334] "Generic (PLEG): container finished" podID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerID="f88be5bedbbc34161d653695096fb400839e346b8c89d725084aba39145623e5" exitCode=0 Feb 25 
12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.750170 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerDied","Data":"0aa9e7276d63c74f973f1a9ab82927df2838e0efe2e4543358fee0062c6f9a89"} Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.750237 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerDied","Data":"307be29c718a83803567db76497968305a12ac38154316e21bffaeb1781b7ed5"} Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.750251 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerDied","Data":"f88be5bedbbc34161d653695096fb400839e346b8c89d725084aba39145623e5"} Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.796524 5005 scope.go:117] "RemoveContainer" containerID="81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.803238 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.816071 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.831220 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:39 crc kubenswrapper[5005]: E0225 12:07:39.835754 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api-log" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.835788 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api-log" Feb 25 12:07:39 crc kubenswrapper[5005]: E0225 12:07:39.835812 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.835819 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.835993 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api-log" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.836012 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" containerName="manila-api" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.836996 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.841933 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.842267 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.842424 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.852740 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.866513 5005 scope.go:117] "RemoveContainer" containerID="db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071" Feb 25 12:07:39 crc kubenswrapper[5005]: E0225 12:07:39.869711 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071\": container with ID starting with 
db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071 not found: ID does not exist" containerID="db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.869750 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071"} err="failed to get container status \"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071\": rpc error: code = NotFound desc = could not find container \"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071\": container with ID starting with db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071 not found: ID does not exist" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.869771 5005 scope.go:117] "RemoveContainer" containerID="81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451" Feb 25 12:07:39 crc kubenswrapper[5005]: E0225 12:07:39.870180 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451\": container with ID starting with 81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451 not found: ID does not exist" containerID="81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.870224 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451"} err="failed to get container status \"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451\": rpc error: code = NotFound desc = could not find container \"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451\": container with ID starting with 81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451 not found: ID does not 
exist" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.870254 5005 scope.go:117] "RemoveContainer" containerID="db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.870612 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071"} err="failed to get container status \"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071\": rpc error: code = NotFound desc = could not find container \"db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071\": container with ID starting with db0550558d6b70b9ba1d2d92a98676042c9da39904d09d2b75e05f552fa8d071 not found: ID does not exist" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.870653 5005 scope.go:117] "RemoveContainer" containerID="81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.871210 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451"} err="failed to get container status \"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451\": rpc error: code = NotFound desc = could not find container \"81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451\": container with ID starting with 81e2057e252d53ee711add933336a80b8dabfb6f7c22bc2339ee9bf8f666c451 not found: ID does not exist" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896365 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-public-tls-certs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896424 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-scripts\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896447 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e583340e-f1c1-49c8-bb9b-edce3afe87c5-etc-machine-id\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896498 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e583340e-f1c1-49c8-bb9b-edce3afe87c5-logs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896518 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-config-data\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896537 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r495p\" (UniqueName: \"kubernetes.io/projected/e583340e-f1c1-49c8-bb9b-edce3afe87c5-kube-api-access-r495p\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896596 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-config-data-custom\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896630 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-internal-tls-certs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.896677 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998347 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-config-data-custom\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998487 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-internal-tls-certs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998538 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") 
" pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998566 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-public-tls-certs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998582 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-scripts\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998603 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e583340e-f1c1-49c8-bb9b-edce3afe87c5-etc-machine-id\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998645 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e583340e-f1c1-49c8-bb9b-edce3afe87c5-logs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998663 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-config-data\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:39 crc kubenswrapper[5005]: I0225 12:07:39.998681 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r495p\" (UniqueName: 
\"kubernetes.io/projected/e583340e-f1c1-49c8-bb9b-edce3afe87c5-kube-api-access-r495p\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.000813 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e583340e-f1c1-49c8-bb9b-edce3afe87c5-logs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.000945 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e583340e-f1c1-49c8-bb9b-edce3afe87c5-etc-machine-id\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.005091 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-scripts\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.005350 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-config-data-custom\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.009761 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-public-tls-certs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.009991 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.010111 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-internal-tls-certs\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.016228 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e583340e-f1c1-49c8-bb9b-edce3afe87c5-config-data\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.019423 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r495p\" (UniqueName: \"kubernetes.io/projected/e583340e-f1c1-49c8-bb9b-edce3afe87c5-kube-api-access-r495p\") pod \"manila-api-0\" (UID: \"e583340e-f1c1-49c8-bb9b-edce3afe87c5\") " pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.229466 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 25 12:07:40 crc kubenswrapper[5005]: I0225 12:07:40.718832 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0924249-650f-4937-ba9f-4fc5619a5fd1" path="/var/lib/kubelet/pods/c0924249-650f-4937-ba9f-4fc5619a5fd1/volumes" Feb 25 12:07:42 crc kubenswrapper[5005]: I0225 12:07:42.805075 5005 generic.go:334] "Generic (PLEG): container finished" podID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerID="fbd62676a0db118d9d36752afba8e28086d289817b8972fbe610e2f28bd4f8ee" exitCode=0 Feb 25 12:07:42 crc kubenswrapper[5005]: I0225 12:07:42.805143 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerDied","Data":"fbd62676a0db118d9d36752afba8e28086d289817b8972fbe610e2f28bd4f8ee"} Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.016175 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.100827 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-log-httpd\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.100993 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-combined-ca-bundle\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101031 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-sg-core-conf-yaml\") pod 
\"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101234 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-scripts\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101277 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-ceilometer-tls-certs\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101329 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-config-data\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101389 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxcq7\" (UniqueName: \"kubernetes.io/projected/29c39ed3-bc30-4749-b26c-92e320736c0f-kube-api-access-kxcq7\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101412 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-run-httpd\") pod \"29c39ed3-bc30-4749-b26c-92e320736c0f\" (UID: \"29c39ed3-bc30-4749-b26c-92e320736c0f\") " Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.101619 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.102050 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.102285 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.102309 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29c39ed3-bc30-4749-b26c-92e320736c0f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.108081 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-scripts" (OuterVolumeSpecName: "scripts") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.111920 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c39ed3-bc30-4749-b26c-92e320736c0f-kube-api-access-kxcq7" (OuterVolumeSpecName: "kube-api-access-kxcq7") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "kube-api-access-kxcq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.132442 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.173843 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.194960 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.204822 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.204874 5005 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.204885 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxcq7\" (UniqueName: \"kubernetes.io/projected/29c39ed3-bc30-4749-b26c-92e320736c0f-kube-api-access-kxcq7\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.204894 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.204902 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.228610 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-config-data" (OuterVolumeSpecName: "config-data") pod "29c39ed3-bc30-4749-b26c-92e320736c0f" (UID: "29c39ed3-bc30-4749-b26c-92e320736c0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.287756 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.299681 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 25 12:07:44 crc kubenswrapper[5005]: W0225 12:07:44.301157 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode583340e_f1c1_49c8_bb9b_edce3afe87c5.slice/crio-a7efa1194af46296462524225c698cc21220d90880f8f189b02d12806f5b96e8 WatchSource:0}: Error finding container a7efa1194af46296462524225c698cc21220d90880f8f189b02d12806f5b96e8: Status 404 returned error can't find the container with id a7efa1194af46296462524225c698cc21220d90880f8f189b02d12806f5b96e8 Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.306360 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29c39ed3-bc30-4749-b26c-92e320736c0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.387676 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-mb8k4" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.492460 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-c22hh"] Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.492698 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerName="dnsmasq-dns" containerID="cri-o://954bb89a70bbfcc542b01de400fd7af576b9256d11d9d25461ae572ee9913db8" gracePeriod=10 Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.853966 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-share-share1-0" event={"ID":"66e4f614-7882-4113-9196-36ad55e76edb","Type":"ContainerStarted","Data":"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e"} Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.860135 5005 generic.go:334] "Generic (PLEG): container finished" podID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerID="954bb89a70bbfcc542b01de400fd7af576b9256d11d9d25461ae572ee9913db8" exitCode=0 Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.860201 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" event={"ID":"84abcbc9-9dcf-42fa-ac91-67d55b7895b8","Type":"ContainerDied","Data":"954bb89a70bbfcc542b01de400fd7af576b9256d11d9d25461ae572ee9913db8"} Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.862864 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e583340e-f1c1-49c8-bb9b-edce3afe87c5","Type":"ContainerStarted","Data":"a7efa1194af46296462524225c698cc21220d90880f8f189b02d12806f5b96e8"} Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.892188 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29c39ed3-bc30-4749-b26c-92e320736c0f","Type":"ContainerDied","Data":"ff2a7205c9fcee10165d45fbcc4e6fada2bbe36e75ef6488a1f3025ae9371c65"} Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.892240 5005 scope.go:117] "RemoveContainer" containerID="0aa9e7276d63c74f973f1a9ab82927df2838e0efe2e4543358fee0062c6f9a89" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.892365 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.929163 5005 scope.go:117] "RemoveContainer" containerID="307be29c718a83803567db76497968305a12ac38154316e21bffaeb1781b7ed5" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.935534 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.959447 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.977530 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:44 crc kubenswrapper[5005]: E0225 12:07:44.977976 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-notification-agent" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.977994 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-notification-agent" Feb 25 12:07:44 crc kubenswrapper[5005]: E0225 12:07:44.978008 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="sg-core" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978014 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="sg-core" Feb 25 12:07:44 crc kubenswrapper[5005]: E0225 12:07:44.978025 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-central-agent" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978031 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-central-agent" Feb 25 12:07:44 crc kubenswrapper[5005]: E0225 12:07:44.978047 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="proxy-httpd" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978052 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="proxy-httpd" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978218 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-notification-agent" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978232 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="proxy-httpd" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978245 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="sg-core" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.978253 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" containerName="ceilometer-central-agent" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.979993 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.983516 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.983709 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.983857 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.985813 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:44 crc kubenswrapper[5005]: I0225 12:07:44.991259 5005 scope.go:117] "RemoveContainer" containerID="fbd62676a0db118d9d36752afba8e28086d289817b8972fbe610e2f28bd4f8ee" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.031733 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-run-httpd\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032269 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032301 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6z5m\" (UniqueName: \"kubernetes.io/projected/3427d7b8-3604-4c34-8f83-d21644e52dd7-kube-api-access-m6z5m\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " 
pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032353 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-log-httpd\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032395 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-scripts\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032648 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032705 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-config-data\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.032738 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136562 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-log-httpd\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136618 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-scripts\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136673 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136710 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-config-data\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136730 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136785 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-run-httpd\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " 
pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136825 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.136843 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6z5m\" (UniqueName: \"kubernetes.io/projected/3427d7b8-3604-4c34-8f83-d21644e52dd7-kube-api-access-m6z5m\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.137669 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-log-httpd\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.151842 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-run-httpd\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.169299 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-scripts\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.171979 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.173603 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.173747 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.178155 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6z5m\" (UniqueName: \"kubernetes.io/projected/3427d7b8-3604-4c34-8f83-d21644e52dd7-kube-api-access-m6z5m\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.195147 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-config-data\") pod \"ceilometer-0\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.229917 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.244260 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.277845 5005 scope.go:117] "RemoveContainer" containerID="f88be5bedbbc34161d653695096fb400839e346b8c89d725084aba39145623e5" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.345144 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-nb\") pod \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.345209 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-dns-svc\") pod \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.345265 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-openstack-edpm-ipam\") pod \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.345336 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-config\") pod \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.352151 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-sb\") pod \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " 
Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.352352 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmfm4\" (UniqueName: \"kubernetes.io/projected/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-kube-api-access-hmfm4\") pod \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\" (UID: \"84abcbc9-9dcf-42fa-ac91-67d55b7895b8\") " Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.434979 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-kube-api-access-hmfm4" (OuterVolumeSpecName: "kube-api-access-hmfm4") pod "84abcbc9-9dcf-42fa-ac91-67d55b7895b8" (UID: "84abcbc9-9dcf-42fa-ac91-67d55b7895b8"). InnerVolumeSpecName "kube-api-access-hmfm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.483555 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmfm4\" (UniqueName: \"kubernetes.io/projected/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-kube-api-access-hmfm4\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.487044 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84abcbc9-9dcf-42fa-ac91-67d55b7895b8" (UID: "84abcbc9-9dcf-42fa-ac91-67d55b7895b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.504463 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "84abcbc9-9dcf-42fa-ac91-67d55b7895b8" (UID: "84abcbc9-9dcf-42fa-ac91-67d55b7895b8"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.513970 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84abcbc9-9dcf-42fa-ac91-67d55b7895b8" (UID: "84abcbc9-9dcf-42fa-ac91-67d55b7895b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.538184 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84abcbc9-9dcf-42fa-ac91-67d55b7895b8" (UID: "84abcbc9-9dcf-42fa-ac91-67d55b7895b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.546087 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-config" (OuterVolumeSpecName: "config") pod "84abcbc9-9dcf-42fa-ac91-67d55b7895b8" (UID: "84abcbc9-9dcf-42fa-ac91-67d55b7895b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.586321 5005 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.586358 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.586383 5005 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-config\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.586392 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.586402 5005 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84abcbc9-9dcf-42fa-ac91-67d55b7895b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.903523 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" event={"ID":"84abcbc9-9dcf-42fa-ac91-67d55b7895b8","Type":"ContainerDied","Data":"f3f38256fdf40b3ce521b522c2b4fb3083e0dbd8ac7c7d65ad01bf9d58e94027"} Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.903581 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-c22hh" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.904166 5005 scope.go:117] "RemoveContainer" containerID="954bb89a70bbfcc542b01de400fd7af576b9256d11d9d25461ae572ee9913db8" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.909429 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e583340e-f1c1-49c8-bb9b-edce3afe87c5","Type":"ContainerStarted","Data":"267af1f55a3bef7802105aa0e9936e493991b258cdd495c7c7d946bbecf4a406"} Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.909480 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"e583340e-f1c1-49c8-bb9b-edce3afe87c5","Type":"ContainerStarted","Data":"bd4fc7e2164404e3eb45d24405a5ad7195facaf4ba05503c558c633ae2168021"} Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.910599 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.924382 5005 scope.go:117] "RemoveContainer" containerID="d1fb3736af0dac1d68c5dc0e6520bdb6a2e0d5b60f34d0084a3816ff0a118dd3" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.939177 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.939152941 podStartE2EDuration="6.939152941s" podCreationTimestamp="2026-02-25 12:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:07:45.927456331 +0000 UTC m=+2979.968188658" watchObservedRunningTime="2026-02-25 12:07:45.939152941 +0000 UTC m=+2979.979885268" Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.942845 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"66e4f614-7882-4113-9196-36ad55e76edb","Type":"ContainerStarted","Data":"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde"} Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.976502 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-c22hh"] Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.986526 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-c22hh"] Feb 25 12:07:45 crc kubenswrapper[5005]: I0225 12:07:45.992566 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.472831615 podStartE2EDuration="12.992547214s" podCreationTimestamp="2026-02-25 12:07:33 +0000 UTC" firstStartedPulling="2026-02-25 12:07:35.184202387 +0000 UTC m=+2969.224934714" lastFinishedPulling="2026-02-25 12:07:43.703917986 +0000 UTC m=+2977.744650313" observedRunningTime="2026-02-25 12:07:45.983479766 +0000 UTC m=+2980.024212103" watchObservedRunningTime="2026-02-25 12:07:45.992547214 +0000 UTC m=+2980.033279541" Feb 25 12:07:46 crc kubenswrapper[5005]: W0225 12:07:46.019529 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3427d7b8_3604_4c34_8f83_d21644e52dd7.slice/crio-5185d4528ad90c702e86b46ffd85c90d84844a0e2b426b0c8997d68017ba0c83 WatchSource:0}: Error finding container 5185d4528ad90c702e86b46ffd85c90d84844a0e2b426b0c8997d68017ba0c83: Status 404 returned error can't find the container with id 5185d4528ad90c702e86b46ffd85c90d84844a0e2b426b0c8997d68017ba0c83 Feb 25 12:07:46 crc kubenswrapper[5005]: I0225 12:07:46.019557 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:46 crc kubenswrapper[5005]: I0225 12:07:46.696115 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c39ed3-bc30-4749-b26c-92e320736c0f" 
path="/var/lib/kubelet/pods/29c39ed3-bc30-4749-b26c-92e320736c0f/volumes" Feb 25 12:07:46 crc kubenswrapper[5005]: I0225 12:07:46.697516 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" path="/var/lib/kubelet/pods/84abcbc9-9dcf-42fa-ac91-67d55b7895b8/volumes" Feb 25 12:07:46 crc kubenswrapper[5005]: I0225 12:07:46.959080 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerStarted","Data":"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d"} Feb 25 12:07:46 crc kubenswrapper[5005]: I0225 12:07:46.959116 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerStarted","Data":"5185d4528ad90c702e86b46ffd85c90d84844a0e2b426b0c8997d68017ba0c83"} Feb 25 12:07:47 crc kubenswrapper[5005]: I0225 12:07:47.886687 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:48 crc kubenswrapper[5005]: I0225 12:07:48.987964 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerStarted","Data":"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929"} Feb 25 12:07:48 crc kubenswrapper[5005]: I0225 12:07:48.988514 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerStarted","Data":"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057"} Feb 25 12:07:51 crc kubenswrapper[5005]: I0225 12:07:51.685428 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:07:51 crc kubenswrapper[5005]: E0225 12:07:51.686676 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.023484 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerStarted","Data":"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336"} Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.023638 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-central-agent" containerID="cri-o://a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" gracePeriod=30 Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.023674 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="proxy-httpd" containerID="cri-o://486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" gracePeriod=30 Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.023726 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-notification-agent" containerID="cri-o://81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" gracePeriod=30 Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.023782 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="sg-core" 
containerID="cri-o://f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" gracePeriod=30 Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.023651 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.089946 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.695842907 podStartE2EDuration="8.089922649s" podCreationTimestamp="2026-02-25 12:07:44 +0000 UTC" firstStartedPulling="2026-02-25 12:07:46.021280629 +0000 UTC m=+2980.062012946" lastFinishedPulling="2026-02-25 12:07:51.415360351 +0000 UTC m=+2985.456092688" observedRunningTime="2026-02-25 12:07:52.061710571 +0000 UTC m=+2986.102442898" watchObservedRunningTime="2026-02-25 12:07:52.089922649 +0000 UTC m=+2986.130654986" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.791313 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885109 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-ceilometer-tls-certs\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885151 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-config-data\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885235 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-run-httpd\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885305 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6z5m\" (UniqueName: \"kubernetes.io/projected/3427d7b8-3604-4c34-8f83-d21644e52dd7-kube-api-access-m6z5m\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885402 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-log-httpd\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885425 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-combined-ca-bundle\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885516 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-sg-core-conf-yaml\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885602 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-scripts\") pod \"3427d7b8-3604-4c34-8f83-d21644e52dd7\" (UID: \"3427d7b8-3604-4c34-8f83-d21644e52dd7\") " Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.885779 
5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.886145 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.887167 5005 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.887202 5005 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3427d7b8-3604-4c34-8f83-d21644e52dd7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.891137 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-scripts" (OuterVolumeSpecName: "scripts") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.891198 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3427d7b8-3604-4c34-8f83-d21644e52dd7-kube-api-access-m6z5m" (OuterVolumeSpecName: "kube-api-access-m6z5m") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "kube-api-access-m6z5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.920967 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.937564 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.965870 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.983056 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-config-data" (OuterVolumeSpecName: "config-data") pod "3427d7b8-3604-4c34-8f83-d21644e52dd7" (UID: "3427d7b8-3604-4c34-8f83-d21644e52dd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.988695 5005 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.988732 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.988748 5005 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.988759 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.988769 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6z5m\" (UniqueName: \"kubernetes.io/projected/3427d7b8-3604-4c34-8f83-d21644e52dd7-kube-api-access-m6z5m\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:52 crc kubenswrapper[5005]: I0225 12:07:52.988782 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3427d7b8-3604-4c34-8f83-d21644e52dd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039641 5005 generic.go:334] "Generic (PLEG): container finished" podID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" exitCode=0 Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039711 5005 generic.go:334] "Generic (PLEG): container finished" podID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" exitCode=2 Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039726 5005 generic.go:334] "Generic (PLEG): container finished" podID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" exitCode=0 Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039738 5005 generic.go:334] "Generic (PLEG): container finished" podID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" exitCode=0 Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039763 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerDied","Data":"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336"} Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039794 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerDied","Data":"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929"} Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039810 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerDied","Data":"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057"} Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039825 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerDied","Data":"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d"} Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039838 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3427d7b8-3604-4c34-8f83-d21644e52dd7","Type":"ContainerDied","Data":"5185d4528ad90c702e86b46ffd85c90d84844a0e2b426b0c8997d68017ba0c83"} Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.039860 5005 scope.go:117] "RemoveContainer" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.040025 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.091066 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.097274 5005 scope.go:117] "RemoveContainer" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.108206 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119232 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.119594 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="sg-core" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119620 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="sg-core" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.119642 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="proxy-httpd" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119649 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="proxy-httpd" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.119663 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerName="init" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119671 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerName="init" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.119691 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" 
containerName="ceilometer-central-agent" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119697 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-central-agent" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.119708 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-notification-agent" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119714 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-notification-agent" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.119722 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerName="dnsmasq-dns" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119729 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerName="dnsmasq-dns" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119911 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="84abcbc9-9dcf-42fa-ac91-67d55b7895b8" containerName="dnsmasq-dns" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119929 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="sg-core" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119946 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="proxy-httpd" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119958 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-notification-agent" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.119975 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" containerName="ceilometer-central-agent" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.121579 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.126620 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.147095 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.147414 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.147728 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.175296 5005 scope.go:117] "RemoveContainer" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.193717 5005 scope.go:117] "RemoveContainer" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.215338 5005 scope.go:117] "RemoveContainer" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.215835 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": container with ID starting with 486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336 not found: ID does not exist" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.215867 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336"} err="failed to get container status \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": rpc error: code = NotFound desc = could not find container \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": container with ID starting with 486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.215918 5005 scope.go:117] "RemoveContainer" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.216533 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": container with ID starting with f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929 not found: ID does not exist" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.216554 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929"} err="failed to get container status \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": rpc error: code = NotFound desc = could not find container \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": container with ID starting with f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.216567 5005 scope.go:117] "RemoveContainer" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 
12:07:53.216864 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": container with ID starting with 81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057 not found: ID does not exist" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.216887 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057"} err="failed to get container status \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": rpc error: code = NotFound desc = could not find container \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": container with ID starting with 81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.216899 5005 scope.go:117] "RemoveContainer" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" Feb 25 12:07:53 crc kubenswrapper[5005]: E0225 12:07:53.217205 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": container with ID starting with a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d not found: ID does not exist" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.217258 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d"} err="failed to get container status \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": rpc 
error: code = NotFound desc = could not find container \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": container with ID starting with a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.217294 5005 scope.go:117] "RemoveContainer" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.217668 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336"} err="failed to get container status \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": rpc error: code = NotFound desc = could not find container \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": container with ID starting with 486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.217691 5005 scope.go:117] "RemoveContainer" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.217896 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929"} err="failed to get container status \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": rpc error: code = NotFound desc = could not find container \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": container with ID starting with f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.217926 5005 scope.go:117] "RemoveContainer" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" Feb 25 12:07:53 crc 
kubenswrapper[5005]: I0225 12:07:53.218153 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057"} err="failed to get container status \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": rpc error: code = NotFound desc = could not find container \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": container with ID starting with 81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218178 5005 scope.go:117] "RemoveContainer" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218378 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d"} err="failed to get container status \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": rpc error: code = NotFound desc = could not find container \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": container with ID starting with a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218401 5005 scope.go:117] "RemoveContainer" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218589 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336"} err="failed to get container status \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": rpc error: code = NotFound desc = could not find container \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": container 
with ID starting with 486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218615 5005 scope.go:117] "RemoveContainer" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218823 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929"} err="failed to get container status \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": rpc error: code = NotFound desc = could not find container \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": container with ID starting with f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.218905 5005 scope.go:117] "RemoveContainer" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.219255 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057"} err="failed to get container status \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": rpc error: code = NotFound desc = could not find container \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": container with ID starting with 81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.219276 5005 scope.go:117] "RemoveContainer" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.219512 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d"} err="failed to get container status \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": rpc error: code = NotFound desc = could not find container \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": container with ID starting with a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.219539 5005 scope.go:117] "RemoveContainer" containerID="486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.219781 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336"} err="failed to get container status \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": rpc error: code = NotFound desc = could not find container \"486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336\": container with ID starting with 486d45dd24cf36a13db323d374f1d4e95ee32ebe39770d830d339c530a577336 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.219804 5005 scope.go:117] "RemoveContainer" containerID="f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.220108 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929"} err="failed to get container status \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": rpc error: code = NotFound desc = could not find container \"f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929\": container with ID starting with f7714da9b5d2bd8d48c09211b15f72fe72e2d85348381df9d58f18526d7c1929 not found: ID does not 
exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.220129 5005 scope.go:117] "RemoveContainer" containerID="81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.220385 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057"} err="failed to get container status \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": rpc error: code = NotFound desc = could not find container \"81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057\": container with ID starting with 81cc88fb278d2ea432c31f2b9c22971b6f5f075aa89bc798114afd2225215057 not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.220416 5005 scope.go:117] "RemoveContainer" containerID="a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.220644 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d"} err="failed to get container status \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": rpc error: code = NotFound desc = could not find container \"a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d\": container with ID starting with a5ff6bffdb9ab9ca3d12b6d78a03d235b2ee0669aaadb3a8a12e5d7588e5986d not found: ID does not exist" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.296324 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-config-data\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.296618 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b438d6e5-da98-4860-ae13-71372ea976cb-log-httpd\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.296767 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.296863 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b438d6e5-da98-4860-ae13-71372ea976cb-run-httpd\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.296902 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49pk\" (UniqueName: \"kubernetes.io/projected/b438d6e5-da98-4860-ae13-71372ea976cb-kube-api-access-r49pk\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.297203 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-scripts\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.297298 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.297484 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399489 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-config-data\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399568 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b438d6e5-da98-4860-ae13-71372ea976cb-log-httpd\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399604 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399649 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b438d6e5-da98-4860-ae13-71372ea976cb-run-httpd\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc 
kubenswrapper[5005]: I0225 12:07:53.399675 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r49pk\" (UniqueName: \"kubernetes.io/projected/b438d6e5-da98-4860-ae13-71372ea976cb-kube-api-access-r49pk\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399714 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-scripts\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399735 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.399767 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.400295 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b438d6e5-da98-4860-ae13-71372ea976cb-run-httpd\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.400592 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b438d6e5-da98-4860-ae13-71372ea976cb-log-httpd\") pod 
\"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.405935 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-config-data\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.406058 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.406130 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.407134 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-scripts\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.408364 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b438d6e5-da98-4860-ae13-71372ea976cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.417888 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r49pk\" 
(UniqueName: \"kubernetes.io/projected/b438d6e5-da98-4860-ae13-71372ea976cb-kube-api-access-r49pk\") pod \"ceilometer-0\" (UID: \"b438d6e5-da98-4860-ae13-71372ea976cb\") " pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.465965 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 25 12:07:53 crc kubenswrapper[5005]: W0225 12:07:53.955098 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb438d6e5_da98_4860_ae13_71372ea976cb.slice/crio-f5d0d9b3d38174acf2e1967b6e16d05c698eca5040ffeee51244402bfc414ade WatchSource:0}: Error finding container f5d0d9b3d38174acf2e1967b6e16d05c698eca5040ffeee51244402bfc414ade: Status 404 returned error can't find the container with id f5d0d9b3d38174acf2e1967b6e16d05c698eca5040ffeee51244402bfc414ade Feb 25 12:07:53 crc kubenswrapper[5005]: I0225 12:07:53.955281 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 25 12:07:54 crc kubenswrapper[5005]: I0225 12:07:54.051604 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b438d6e5-da98-4860-ae13-71372ea976cb","Type":"ContainerStarted","Data":"f5d0d9b3d38174acf2e1967b6e16d05c698eca5040ffeee51244402bfc414ade"} Feb 25 12:07:54 crc kubenswrapper[5005]: I0225 12:07:54.303494 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 25 12:07:54 crc kubenswrapper[5005]: I0225 12:07:54.698171 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3427d7b8-3604-4c34-8f83-d21644e52dd7" path="/var/lib/kubelet/pods/3427d7b8-3604-4c34-8f83-d21644e52dd7/volumes" Feb 25 12:07:55 crc kubenswrapper[5005]: I0225 12:07:55.088765 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b438d6e5-da98-4860-ae13-71372ea976cb","Type":"ContainerStarted","Data":"1591e731361e4b89e6a64d25cc27b2a45b3994d2229975c226586fe0c969dd19"} Feb 25 12:07:55 crc kubenswrapper[5005]: I0225 12:07:55.836435 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 25 12:07:55 crc kubenswrapper[5005]: I0225 12:07:55.890820 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.010545 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.060592 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.098761 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b438d6e5-da98-4860-ae13-71372ea976cb","Type":"ContainerStarted","Data":"a7a512da9f1d4262fa78460355d704733143424e42a399a8717af3d9df54f81d"} Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.098923 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="manila-share" containerID="cri-o://acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e" gracePeriod=30 Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.098988 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="probe" containerID="cri-o://649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde" gracePeriod=30 Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.099084 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" 
podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="manila-scheduler" containerID="cri-o://67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b" gracePeriod=30 Feb 25 12:07:56 crc kubenswrapper[5005]: I0225 12:07:56.099139 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="probe" containerID="cri-o://2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd" gracePeriod=30 Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.035956 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.110259 5005 generic.go:334] "Generic (PLEG): container finished" podID="66e4f614-7882-4113-9196-36ad55e76edb" containerID="649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde" exitCode=0 Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.110768 5005 generic.go:334] "Generic (PLEG): container finished" podID="66e4f614-7882-4113-9196-36ad55e76edb" containerID="acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e" exitCode=1 Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.110335 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.110346 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"66e4f614-7882-4113-9196-36ad55e76edb","Type":"ContainerDied","Data":"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde"} Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.111246 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"66e4f614-7882-4113-9196-36ad55e76edb","Type":"ContainerDied","Data":"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e"} Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.111292 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"66e4f614-7882-4113-9196-36ad55e76edb","Type":"ContainerDied","Data":"eabbd3f8973d05e04b49763aec71a3d84a71cd4324b155268d431ca0fa1535d5"} Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.111312 5005 scope.go:117] "RemoveContainer" containerID="649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.113489 5005 generic.go:334] "Generic (PLEG): container finished" podID="81f2b95d-74fd-4386-acb0-7c2508043937" containerID="2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd" exitCode=0 Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.113580 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"81f2b95d-74fd-4386-acb0-7c2508043937","Type":"ContainerDied","Data":"2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd"} Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.116186 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b438d6e5-da98-4860-ae13-71372ea976cb","Type":"ContainerStarted","Data":"9774d3a4135ad96c739f88975ab0ee9a6ba19b095ae7481f36994cb1a2df0795"} Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.143647 5005 scope.go:117] "RemoveContainer" containerID="acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.181046 5005 scope.go:117] "RemoveContainer" containerID="649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde" Feb 25 12:07:57 crc kubenswrapper[5005]: E0225 12:07:57.181539 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde\": container with ID starting with 649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde not found: ID does not exist" containerID="649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.181578 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde"} err="failed to get container status \"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde\": rpc error: code = NotFound desc = could not find container \"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde\": container with ID starting with 649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde not found: ID does not exist" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.181615 5005 scope.go:117] "RemoveContainer" containerID="acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e" Feb 25 12:07:57 crc kubenswrapper[5005]: E0225 12:07:57.181963 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e\": container with ID starting with acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e not found: ID does not exist" containerID="acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.182002 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e"} err="failed to get container status \"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e\": rpc error: code = NotFound desc = could not find container \"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e\": container with ID starting with acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e not found: ID does not exist" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.182036 5005 scope.go:117] "RemoveContainer" containerID="649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.182597 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde"} err="failed to get container status \"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde\": rpc error: code = NotFound desc = could not find container \"649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde\": container with ID starting with 649023ffc01cc0e27f50035ba4c5d97e731d18fdde28a5ad44a3bd9ab6597fde not found: ID does not exist" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.182635 5005 scope.go:117] "RemoveContainer" containerID="acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.182875 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e"} err="failed to get container status \"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e\": rpc error: code = NotFound desc = could not find container \"acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e\": container with ID starting with acf08c337ae1567a292f6a78b155167bf504432d1176656b3d3472968e0a338e not found: ID does not exist" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194289 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-ceph\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194334 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-var-lib-manila\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194388 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-scripts\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194450 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6wvn\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-kube-api-access-c6wvn\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194497 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data-custom\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194560 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-etc-machine-id\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194554 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194674 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-combined-ca-bundle\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.194704 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data\") pod \"66e4f614-7882-4113-9196-36ad55e76edb\" (UID: \"66e4f614-7882-4113-9196-36ad55e76edb\") " Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.195012 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.195225 5005 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.195239 5005 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66e4f614-7882-4113-9196-36ad55e76edb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.201327 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-ceph" (OuterVolumeSpecName: "ceph") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.201365 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.201669 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-scripts" (OuterVolumeSpecName: "scripts") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.207802 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-kube-api-access-c6wvn" (OuterVolumeSpecName: "kube-api-access-c6wvn") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "kube-api-access-c6wvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.242083 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.289253 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data" (OuterVolumeSpecName: "config-data") pod "66e4f614-7882-4113-9196-36ad55e76edb" (UID: "66e4f614-7882-4113-9196-36ad55e76edb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.296717 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.296748 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.296758 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6wvn\" (UniqueName: \"kubernetes.io/projected/66e4f614-7882-4113-9196-36ad55e76edb-kube-api-access-c6wvn\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.296771 5005 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.296780 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.296787 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66e4f614-7882-4113-9196-36ad55e76edb-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.447574 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.460989 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 
12:07:57.478066 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:57 crc kubenswrapper[5005]: E0225 12:07:57.478429 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="manila-share" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.478444 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="manila-share" Feb 25 12:07:57 crc kubenswrapper[5005]: E0225 12:07:57.478480 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="probe" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.478486 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="probe" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.478658 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="manila-share" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.478678 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e4f614-7882-4113-9196-36ad55e76edb" containerName="probe" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.479729 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.482128 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.491912 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.602830 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.602930 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/213c62b6-1330-492a-92aa-4e756678a6f2-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.603040 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/213c62b6-1330-492a-92aa-4e756678a6f2-ceph\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.603058 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/213c62b6-1330-492a-92aa-4e756678a6f2-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: 
I0225 12:07:57.603075 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-config-data\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.603095 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.603386 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6tg\" (UniqueName: \"kubernetes.io/projected/213c62b6-1330-492a-92aa-4e756678a6f2-kube-api-access-nq6tg\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.603544 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-scripts\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705015 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-scripts\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705093 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705157 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/213c62b6-1330-492a-92aa-4e756678a6f2-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705183 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/213c62b6-1330-492a-92aa-4e756678a6f2-ceph\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705199 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/213c62b6-1330-492a-92aa-4e756678a6f2-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705214 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-config-data\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705236 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705280 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6tg\" (UniqueName: \"kubernetes.io/projected/213c62b6-1330-492a-92aa-4e756678a6f2-kube-api-access-nq6tg\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705911 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/213c62b6-1330-492a-92aa-4e756678a6f2-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.705855 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/213c62b6-1330-492a-92aa-4e756678a6f2-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.710329 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-config-data\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.710490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.710521 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.710608 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/213c62b6-1330-492a-92aa-4e756678a6f2-ceph\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.714006 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213c62b6-1330-492a-92aa-4e756678a6f2-scripts\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.723331 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6tg\" (UniqueName: \"kubernetes.io/projected/213c62b6-1330-492a-92aa-4e756678a6f2-kube-api-access-nq6tg\") pod \"manila-share-share1-0\" (UID: \"213c62b6-1330-492a-92aa-4e756678a6f2\") " pod="openstack/manila-share-share1-0" Feb 25 12:07:57 crc kubenswrapper[5005]: I0225 12:07:57.807387 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 25 12:07:58 crc kubenswrapper[5005]: I0225 12:07:58.350556 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 25 12:07:58 crc kubenswrapper[5005]: W0225 12:07:58.363134 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod213c62b6_1330_492a_92aa_4e756678a6f2.slice/crio-5c589f983d9ea3561c01c75c6b8e7451f8d7d767808a9704ab63611ef4262ce5 WatchSource:0}: Error finding container 5c589f983d9ea3561c01c75c6b8e7451f8d7d767808a9704ab63611ef4262ce5: Status 404 returned error can't find the container with id 5c589f983d9ea3561c01c75c6b8e7451f8d7d767808a9704ab63611ef4262ce5 Feb 25 12:07:58 crc kubenswrapper[5005]: I0225 12:07:58.701629 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e4f614-7882-4113-9196-36ad55e76edb" path="/var/lib/kubelet/pods/66e4f614-7882-4113-9196-36ad55e76edb/volumes" Feb 25 12:07:59 crc kubenswrapper[5005]: I0225 12:07:59.149157 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"213c62b6-1330-492a-92aa-4e756678a6f2","Type":"ContainerStarted","Data":"c4dda5b24c1e00d95e0d29b304915a57e2d4792244e18b43ed6392f87e3fa62e"} Feb 25 12:07:59 crc kubenswrapper[5005]: I0225 12:07:59.149211 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"213c62b6-1330-492a-92aa-4e756678a6f2","Type":"ContainerStarted","Data":"5c589f983d9ea3561c01c75c6b8e7451f8d7d767808a9704ab63611ef4262ce5"} Feb 25 12:07:59 crc kubenswrapper[5005]: I0225 12:07:59.153155 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b438d6e5-da98-4860-ae13-71372ea976cb","Type":"ContainerStarted","Data":"6dfbd6d501d02e83a215f334f143b0c91164130d6713a2ea31e1f25cdd65d5e1"} Feb 25 12:07:59 crc kubenswrapper[5005]: I0225 12:07:59.153328 5005 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 25 12:07:59 crc kubenswrapper[5005]: I0225 12:07:59.182146 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.795725231 podStartE2EDuration="6.182124701s" podCreationTimestamp="2026-02-25 12:07:53 +0000 UTC" firstStartedPulling="2026-02-25 12:07:53.958027191 +0000 UTC m=+2987.998759508" lastFinishedPulling="2026-02-25 12:07:58.344426651 +0000 UTC m=+2992.385158978" observedRunningTime="2026-02-25 12:07:59.176936341 +0000 UTC m=+2993.217668698" watchObservedRunningTime="2026-02-25 12:07:59.182124701 +0000 UTC m=+2993.222857028" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.156190 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533688-dpwrw"] Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.158514 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.163561 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.163614 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.163561 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.167796 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533688-dpwrw"] Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.168809 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"213c62b6-1330-492a-92aa-4e756678a6f2","Type":"ContainerStarted","Data":"ab02045ef1f60a76a4b69ddfedb9d2c6932912f1927f716dd221af065373f318"} Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.207408 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.207362794 podStartE2EDuration="3.207362794s" podCreationTimestamp="2026-02-25 12:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:08:00.20202304 +0000 UTC m=+2994.242755377" watchObservedRunningTime="2026-02-25 12:08:00.207362794 +0000 UTC m=+2994.248095121" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.265598 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pzs\" (UniqueName: \"kubernetes.io/projected/c21aa569-5f82-4605-930f-6b0b4799779f-kube-api-access-95pzs\") pod \"auto-csr-approver-29533688-dpwrw\" (UID: \"c21aa569-5f82-4605-930f-6b0b4799779f\") " pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.368119 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pzs\" (UniqueName: \"kubernetes.io/projected/c21aa569-5f82-4605-930f-6b0b4799779f-kube-api-access-95pzs\") pod \"auto-csr-approver-29533688-dpwrw\" (UID: \"c21aa569-5f82-4605-930f-6b0b4799779f\") " pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.395117 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pzs\" (UniqueName: \"kubernetes.io/projected/c21aa569-5f82-4605-930f-6b0b4799779f-kube-api-access-95pzs\") pod \"auto-csr-approver-29533688-dpwrw\" (UID: \"c21aa569-5f82-4605-930f-6b0b4799779f\") " pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:00 crc 
kubenswrapper[5005]: I0225 12:08:00.479547 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:00 crc kubenswrapper[5005]: I0225 12:08:00.964833 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533688-dpwrw"] Feb 25 12:08:00 crc kubenswrapper[5005]: W0225 12:08:00.998640 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc21aa569_5f82_4605_930f_6b0b4799779f.slice/crio-e32821f6bd56565b31385e964e3d1270999b3e831e4efaad3d6b2a5caca78fba WatchSource:0}: Error finding container e32821f6bd56565b31385e964e3d1270999b3e831e4efaad3d6b2a5caca78fba: Status 404 returned error can't find the container with id e32821f6bd56565b31385e964e3d1270999b3e831e4efaad3d6b2a5caca78fba Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.129217 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.180286 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533688-dpwrw" event={"ID":"c21aa569-5f82-4605-930f-6b0b4799779f","Type":"ContainerStarted","Data":"e32821f6bd56565b31385e964e3d1270999b3e831e4efaad3d6b2a5caca78fba"} Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.183324 5005 generic.go:334] "Generic (PLEG): container finished" podID="81f2b95d-74fd-4386-acb0-7c2508043937" containerID="67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b" exitCode=0 Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.183853 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.183876 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"81f2b95d-74fd-4386-acb0-7c2508043937","Type":"ContainerDied","Data":"67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b"} Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.183974 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"81f2b95d-74fd-4386-acb0-7c2508043937","Type":"ContainerDied","Data":"ee2dad58d43eeaed5f090e9426e7ae706225152069ea5332efeedf245d714a9a"} Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.184004 5005 scope.go:117] "RemoveContainer" containerID="2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.214244 5005 scope.go:117] "RemoveContainer" containerID="67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.234204 5005 scope.go:117] "RemoveContainer" containerID="2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd" Feb 25 12:08:01 crc kubenswrapper[5005]: E0225 12:08:01.234698 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd\": container with ID starting with 2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd not found: ID does not exist" containerID="2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.234786 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd"} err="failed to get container status 
\"2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd\": rpc error: code = NotFound desc = could not find container \"2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd\": container with ID starting with 2290ffe6d43d688030863c2ad906bb814eb8e94d5a1d4afe15c16c312e9b53bd not found: ID does not exist" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.234821 5005 scope.go:117] "RemoveContainer" containerID="67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b" Feb 25 12:08:01 crc kubenswrapper[5005]: E0225 12:08:01.238780 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b\": container with ID starting with 67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b not found: ID does not exist" containerID="67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.238809 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b"} err="failed to get container status \"67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b\": rpc error: code = NotFound desc = could not find container \"67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b\": container with ID starting with 67acda7614e883251402416d307216e5405ed26d7e2c5a96def8a81a4cfa3e4b not found: ID does not exist" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.288638 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data-custom\") pod \"81f2b95d-74fd-4386-acb0-7c2508043937\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.288841 5005 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-combined-ca-bundle\") pod \"81f2b95d-74fd-4386-acb0-7c2508043937\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.288870 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-scripts\") pod \"81f2b95d-74fd-4386-acb0-7c2508043937\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.288907 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data\") pod \"81f2b95d-74fd-4386-acb0-7c2508043937\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.289142 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81f2b95d-74fd-4386-acb0-7c2508043937-etc-machine-id\") pod \"81f2b95d-74fd-4386-acb0-7c2508043937\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.289206 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh8df\" (UniqueName: \"kubernetes.io/projected/81f2b95d-74fd-4386-acb0-7c2508043937-kube-api-access-nh8df\") pod \"81f2b95d-74fd-4386-acb0-7c2508043937\" (UID: \"81f2b95d-74fd-4386-acb0-7c2508043937\") " Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.292288 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81f2b95d-74fd-4386-acb0-7c2508043937-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "81f2b95d-74fd-4386-acb0-7c2508043937" 
(UID: "81f2b95d-74fd-4386-acb0-7c2508043937"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.298638 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f2b95d-74fd-4386-acb0-7c2508043937-kube-api-access-nh8df" (OuterVolumeSpecName: "kube-api-access-nh8df") pod "81f2b95d-74fd-4386-acb0-7c2508043937" (UID: "81f2b95d-74fd-4386-acb0-7c2508043937"). InnerVolumeSpecName "kube-api-access-nh8df". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.301503 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-scripts" (OuterVolumeSpecName: "scripts") pod "81f2b95d-74fd-4386-acb0-7c2508043937" (UID: "81f2b95d-74fd-4386-acb0-7c2508043937"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.303552 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81f2b95d-74fd-4386-acb0-7c2508043937" (UID: "81f2b95d-74fd-4386-acb0-7c2508043937"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.353932 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81f2b95d-74fd-4386-acb0-7c2508043937" (UID: "81f2b95d-74fd-4386-acb0-7c2508043937"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.392133 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh8df\" (UniqueName: \"kubernetes.io/projected/81f2b95d-74fd-4386-acb0-7c2508043937-kube-api-access-nh8df\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.392467 5005 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.393104 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.393195 5005 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-scripts\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.393274 5005 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/81f2b95d-74fd-4386-acb0-7c2508043937-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.415096 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data" (OuterVolumeSpecName: "config-data") pod "81f2b95d-74fd-4386-acb0-7c2508043937" (UID: "81f2b95d-74fd-4386-acb0-7c2508043937"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.496793 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f2b95d-74fd-4386-acb0-7c2508043937-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.588152 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.596484 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.619872 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:08:01 crc kubenswrapper[5005]: E0225 12:08:01.620266 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="probe" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.620288 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="probe" Feb 25 12:08:01 crc kubenswrapper[5005]: E0225 12:08:01.620318 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="manila-scheduler" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.620326 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="manila-scheduler" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.620547 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="manila-scheduler" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.620559 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" containerName="probe" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 
12:08:01.621642 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.625846 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.631351 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.717746 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.804631 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5rd\" (UniqueName: \"kubernetes.io/projected/eddaedf3-e462-471f-abac-a5e1553d14a4-kube-api-access-rg5rd\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.805062 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-config-data\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.805151 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddaedf3-e462-471f-abac-a5e1553d14a4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.805224 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.805358 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-scripts\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.805447 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.908071 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg5rd\" (UniqueName: \"kubernetes.io/projected/eddaedf3-e462-471f-abac-a5e1553d14a4-kube-api-access-rg5rd\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.908865 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-config-data\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.908986 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddaedf3-e462-471f-abac-a5e1553d14a4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.909068 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.909123 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddaedf3-e462-471f-abac-a5e1553d14a4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.909253 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-scripts\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.909335 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.916632 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-config-data\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.917164 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-scripts\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.917496 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.931979 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg5rd\" (UniqueName: \"kubernetes.io/projected/eddaedf3-e462-471f-abac-a5e1553d14a4-kube-api-access-rg5rd\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.937939 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddaedf3-e462-471f-abac-a5e1553d14a4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eddaedf3-e462-471f-abac-a5e1553d14a4\") " pod="openstack/manila-scheduler-0" Feb 25 12:08:01 crc kubenswrapper[5005]: I0225 12:08:01.954028 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 25 12:08:02 crc kubenswrapper[5005]: I0225 12:08:02.442631 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 25 12:08:02 crc kubenswrapper[5005]: W0225 12:08:02.451091 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeddaedf3_e462_471f_abac_a5e1553d14a4.slice/crio-5021eeb01156e55de02d788365b02cc167349168dc996d5fb469fab85d743638 WatchSource:0}: Error finding container 5021eeb01156e55de02d788365b02cc167349168dc996d5fb469fab85d743638: Status 404 returned error can't find the container with id 5021eeb01156e55de02d788365b02cc167349168dc996d5fb469fab85d743638 Feb 25 12:08:02 crc kubenswrapper[5005]: I0225 12:08:02.702638 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f2b95d-74fd-4386-acb0-7c2508043937" path="/var/lib/kubelet/pods/81f2b95d-74fd-4386-acb0-7c2508043937/volumes" Feb 25 12:08:03 crc kubenswrapper[5005]: I0225 12:08:03.220454 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eddaedf3-e462-471f-abac-a5e1553d14a4","Type":"ContainerStarted","Data":"3fc0df2c53b1c5cb6b9bd127fbeeff8506ac363062f3f054298d505aa6309192"} Feb 25 12:08:03 crc kubenswrapper[5005]: I0225 12:08:03.220875 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eddaedf3-e462-471f-abac-a5e1553d14a4","Type":"ContainerStarted","Data":"5021eeb01156e55de02d788365b02cc167349168dc996d5fb469fab85d743638"} Feb 25 12:08:03 crc kubenswrapper[5005]: I0225 12:08:03.224252 5005 generic.go:334] "Generic (PLEG): container finished" podID="c21aa569-5f82-4605-930f-6b0b4799779f" containerID="b1e188f6d7640cc0567e3f857721d68b5b8144d38d1495625bb12f635bdadd4d" exitCode=0 Feb 25 12:08:03 crc kubenswrapper[5005]: I0225 12:08:03.224329 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29533688-dpwrw" event={"ID":"c21aa569-5f82-4605-930f-6b0b4799779f","Type":"ContainerDied","Data":"b1e188f6d7640cc0567e3f857721d68b5b8144d38d1495625bb12f635bdadd4d"} Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.260556 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eddaedf3-e462-471f-abac-a5e1553d14a4","Type":"ContainerStarted","Data":"3185f50b2059b1a0f7f139eb3b2b75754816b6f2b3d78fe4a09eaf4645126006"} Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.294247 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.294227623 podStartE2EDuration="3.294227623s" podCreationTimestamp="2026-02-25 12:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:08:04.28211273 +0000 UTC m=+2998.322845067" watchObservedRunningTime="2026-02-25 12:08:04.294227623 +0000 UTC m=+2998.334959950" Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.685206 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:08:04 crc kubenswrapper[5005]: E0225 12:08:04.685802 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.691791 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.787478 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95pzs\" (UniqueName: \"kubernetes.io/projected/c21aa569-5f82-4605-930f-6b0b4799779f-kube-api-access-95pzs\") pod \"c21aa569-5f82-4605-930f-6b0b4799779f\" (UID: \"c21aa569-5f82-4605-930f-6b0b4799779f\") " Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.798902 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21aa569-5f82-4605-930f-6b0b4799779f-kube-api-access-95pzs" (OuterVolumeSpecName: "kube-api-access-95pzs") pod "c21aa569-5f82-4605-930f-6b0b4799779f" (UID: "c21aa569-5f82-4605-930f-6b0b4799779f"). InnerVolumeSpecName "kube-api-access-95pzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:08:04 crc kubenswrapper[5005]: I0225 12:08:04.890559 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95pzs\" (UniqueName: \"kubernetes.io/projected/c21aa569-5f82-4605-930f-6b0b4799779f-kube-api-access-95pzs\") on node \"crc\" DevicePath \"\"" Feb 25 12:08:05 crc kubenswrapper[5005]: I0225 12:08:05.275570 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533688-dpwrw" event={"ID":"c21aa569-5f82-4605-930f-6b0b4799779f","Type":"ContainerDied","Data":"e32821f6bd56565b31385e964e3d1270999b3e831e4efaad3d6b2a5caca78fba"} Feb 25 12:08:05 crc kubenswrapper[5005]: I0225 12:08:05.275612 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32821f6bd56565b31385e964e3d1270999b3e831e4efaad3d6b2a5caca78fba" Feb 25 12:08:05 crc kubenswrapper[5005]: I0225 12:08:05.275665 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533688-dpwrw" Feb 25 12:08:05 crc kubenswrapper[5005]: I0225 12:08:05.805416 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533682-l2dn9"] Feb 25 12:08:05 crc kubenswrapper[5005]: I0225 12:08:05.815649 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533682-l2dn9"] Feb 25 12:08:06 crc kubenswrapper[5005]: I0225 12:08:06.703124 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9a9c8d-c303-4a5b-af89-b546f299d069" path="/var/lib/kubelet/pods/be9a9c8d-c303-4a5b-af89-b546f299d069/volumes" Feb 25 12:08:07 crc kubenswrapper[5005]: I0225 12:08:07.808414 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 25 12:08:11 crc kubenswrapper[5005]: I0225 12:08:11.956481 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 25 12:08:16 crc kubenswrapper[5005]: I0225 12:08:16.710392 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:08:16 crc kubenswrapper[5005]: E0225 12:08:16.713062 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:08:19 crc kubenswrapper[5005]: I0225 12:08:19.316186 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 25 12:08:19 crc kubenswrapper[5005]: I0225 12:08:19.667646 5005 scope.go:117] "RemoveContainer" 
containerID="e806359198a2b7c588f1fd50314aef4d6da2c8c10fd9ce849938070e1e4f87c4" Feb 25 12:08:23 crc kubenswrapper[5005]: I0225 12:08:23.342319 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 25 12:08:23 crc kubenswrapper[5005]: I0225 12:08:23.491895 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 25 12:08:28 crc kubenswrapper[5005]: I0225 12:08:28.686005 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:08:28 crc kubenswrapper[5005]: E0225 12:08:28.686849 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:08:39 crc kubenswrapper[5005]: I0225 12:08:39.685885 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:08:39 crc kubenswrapper[5005]: E0225 12:08:39.687903 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:08:48 crc kubenswrapper[5005]: E0225 12:08:48.669521 5005 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.233:37616->38.102.83.233:40985: write tcp 
38.102.83.233:37616->38.102.83.233:40985: write: broken pipe Feb 25 12:08:52 crc kubenswrapper[5005]: I0225 12:08:52.686127 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:08:52 crc kubenswrapper[5005]: E0225 12:08:52.686761 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:09:07 crc kubenswrapper[5005]: I0225 12:09:07.686068 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:09:07 crc kubenswrapper[5005]: E0225 12:09:07.686947 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:09:21 crc kubenswrapper[5005]: I0225 12:09:21.685344 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:09:21 crc kubenswrapper[5005]: E0225 12:09:21.686461 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.338236 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Feb 25 12:09:27 crc kubenswrapper[5005]: E0225 12:09:27.339231 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21aa569-5f82-4605-930f-6b0b4799779f" containerName="oc" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.339246 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21aa569-5f82-4605-930f-6b0b4799779f" containerName="oc" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.339640 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21aa569-5f82-4605-930f-6b0b4799779f" containerName="oc" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.340330 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.343129 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.343265 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.343437 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g5ll5" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.345476 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.357620 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.397295 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.397493 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.397791 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.501887 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502303 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 
12:09:27.502364 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502441 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502474 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502552 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502575 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 
crc kubenswrapper[5005]: I0225 12:09:27.502606 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502634 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.502660 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gl7\" (UniqueName: \"kubernetes.io/projected/31e745d1-9f40-4deb-adb0-7cb412b3b21f-kube-api-access-25gl7\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.504120 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.504166 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " 
pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.510845 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.605146 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.605233 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.605263 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.605327 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 
12:09:27.605433 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.605473 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.605517 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gl7\" (UniqueName: \"kubernetes.io/projected/31e745d1-9f40-4deb-adb0-7cb412b3b21f-kube-api-access-25gl7\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.606239 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.606342 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: 
\"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.606341 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.609610 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.609963 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.610338 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.629493 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gl7\" (UniqueName: \"kubernetes.io/projected/31e745d1-9f40-4deb-adb0-7cb412b3b21f-kube-api-access-25gl7\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" 
Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.657955 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:27 crc kubenswrapper[5005]: I0225 12:09:27.666771 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 25 12:09:28 crc kubenswrapper[5005]: I0225 12:09:28.190524 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Feb 25 12:09:28 crc kubenswrapper[5005]: W0225 12:09:28.198363 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e745d1_9f40_4deb_adb0_7cb412b3b21f.slice/crio-3e8a6b625ca907c6660801f1488e97c6cade48122d989e8e9c402f39a5198e30 WatchSource:0}: Error finding container 3e8a6b625ca907c6660801f1488e97c6cade48122d989e8e9c402f39a5198e30: Status 404 returned error can't find the container with id 3e8a6b625ca907c6660801f1488e97c6cade48122d989e8e9c402f39a5198e30 Feb 25 12:09:29 crc kubenswrapper[5005]: I0225 12:09:29.144435 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"31e745d1-9f40-4deb-adb0-7cb412b3b21f","Type":"ContainerStarted","Data":"3e8a6b625ca907c6660801f1488e97c6cade48122d989e8e9c402f39a5198e30"} Feb 25 12:09:33 crc kubenswrapper[5005]: I0225 12:09:33.685525 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:09:33 crc kubenswrapper[5005]: E0225 12:09:33.686462 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:09:48 crc kubenswrapper[5005]: I0225 12:09:48.686003 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:09:48 crc kubenswrapper[5005]: E0225 12:09:48.686973 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.037237 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cws7k"] Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.048183 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cws7k"] Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.048304 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.188571 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rch\" (UniqueName: \"kubernetes.io/projected/6ced917a-95f4-4ace-981a-fd1b24dc2080-kube-api-access-88rch\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.188710 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-utilities\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.188733 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-catalog-content\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.291052 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-utilities\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.291127 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-catalog-content\") pod \"redhat-marketplace-cws7k\" 
(UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.291321 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rch\" (UniqueName: \"kubernetes.io/projected/6ced917a-95f4-4ace-981a-fd1b24dc2080-kube-api-access-88rch\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.292562 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-utilities\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.292633 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-catalog-content\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.325937 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rch\" (UniqueName: \"kubernetes.io/projected/6ced917a-95f4-4ace-981a-fd1b24dc2080-kube-api-access-88rch\") pod \"redhat-marketplace-cws7k\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:56 crc kubenswrapper[5005]: I0225 12:09:56.403010 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:09:58 crc kubenswrapper[5005]: E0225 12:09:58.529899 5005 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 25 12:09:58 crc kubenswrapper[5005]: E0225 12:09:58.530898 5005 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust
/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25gl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(31e745d1-9f40-4deb-adb0-7cb412b3b21f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 25 12:09:58 crc kubenswrapper[5005]: E0225 12:09:58.532437 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="31e745d1-9f40-4deb-adb0-7cb412b3b21f" Feb 25 12:09:58 crc kubenswrapper[5005]: I0225 12:09:58.805023 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cws7k"] Feb 25 12:09:59 crc kubenswrapper[5005]: I0225 12:09:59.483797 5005 generic.go:334] "Generic (PLEG): container finished" podID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerID="b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c" exitCode=0 Feb 25 12:09:59 crc kubenswrapper[5005]: I0225 12:09:59.483942 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cws7k" event={"ID":"6ced917a-95f4-4ace-981a-fd1b24dc2080","Type":"ContainerDied","Data":"b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c"} Feb 25 12:09:59 crc kubenswrapper[5005]: I0225 12:09:59.484433 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cws7k" event={"ID":"6ced917a-95f4-4ace-981a-fd1b24dc2080","Type":"ContainerStarted","Data":"21b331c56ff1cada7d344e7d75c526bf3c19ab1a05c26eececf27e428d001ac5"} Feb 25 12:09:59 crc kubenswrapper[5005]: E0225 12:09:59.486686 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="31e745d1-9f40-4deb-adb0-7cb412b3b21f" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.150852 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533690-27vkj"] Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.152291 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.154362 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.154981 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.155054 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.164664 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533690-27vkj"] Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.231721 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4z69\" (UniqueName: \"kubernetes.io/projected/210b3f94-a7aa-4823-9778-7aede3fc6a45-kube-api-access-k4z69\") pod \"auto-csr-approver-29533690-27vkj\" (UID: \"210b3f94-a7aa-4823-9778-7aede3fc6a45\") " pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.333935 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4z69\" (UniqueName: \"kubernetes.io/projected/210b3f94-a7aa-4823-9778-7aede3fc6a45-kube-api-access-k4z69\") pod \"auto-csr-approver-29533690-27vkj\" (UID: \"210b3f94-a7aa-4823-9778-7aede3fc6a45\") " pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.357145 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4z69\" (UniqueName: \"kubernetes.io/projected/210b3f94-a7aa-4823-9778-7aede3fc6a45-kube-api-access-k4z69\") pod \"auto-csr-approver-29533690-27vkj\" (UID: \"210b3f94-a7aa-4823-9778-7aede3fc6a45\") " 
pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.469242 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.685537 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:10:00 crc kubenswrapper[5005]: E0225 12:10:00.686201 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:10:00 crc kubenswrapper[5005]: I0225 12:10:00.956149 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533690-27vkj"] Feb 25 12:10:01 crc kubenswrapper[5005]: I0225 12:10:01.503443 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533690-27vkj" event={"ID":"210b3f94-a7aa-4823-9778-7aede3fc6a45","Type":"ContainerStarted","Data":"2c2454638a067b52256636c42ac3d395d93a4503eea5634b646e568b5e764182"} Feb 25 12:10:01 crc kubenswrapper[5005]: I0225 12:10:01.505777 5005 generic.go:334] "Generic (PLEG): container finished" podID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerID="902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0" exitCode=0 Feb 25 12:10:01 crc kubenswrapper[5005]: I0225 12:10:01.505832 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cws7k" event={"ID":"6ced917a-95f4-4ace-981a-fd1b24dc2080","Type":"ContainerDied","Data":"902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0"} Feb 
25 12:10:02 crc kubenswrapper[5005]: I0225 12:10:02.516092 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cws7k" event={"ID":"6ced917a-95f4-4ace-981a-fd1b24dc2080","Type":"ContainerStarted","Data":"7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd"} Feb 25 12:10:02 crc kubenswrapper[5005]: I0225 12:10:02.535647 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cws7k" podStartSLOduration=4.917941882 podStartE2EDuration="7.535631581s" podCreationTimestamp="2026-02-25 12:09:55 +0000 UTC" firstStartedPulling="2026-02-25 12:09:59.486622133 +0000 UTC m=+3113.527354460" lastFinishedPulling="2026-02-25 12:10:02.104311822 +0000 UTC m=+3116.145044159" observedRunningTime="2026-02-25 12:10:02.534297569 +0000 UTC m=+3116.575029906" watchObservedRunningTime="2026-02-25 12:10:02.535631581 +0000 UTC m=+3116.576363908" Feb 25 12:10:06 crc kubenswrapper[5005]: I0225 12:10:06.403990 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:10:06 crc kubenswrapper[5005]: I0225 12:10:06.404693 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:10:06 crc kubenswrapper[5005]: I0225 12:10:06.465646 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:10:07 crc kubenswrapper[5005]: I0225 12:10:07.575601 5005 generic.go:334] "Generic (PLEG): container finished" podID="210b3f94-a7aa-4823-9778-7aede3fc6a45" containerID="08f0b03f8b21330cb633a0cd6358143b99ec3a8b266755e1f386904345370043" exitCode=0 Feb 25 12:10:07 crc kubenswrapper[5005]: I0225 12:10:07.575673 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533690-27vkj" 
event={"ID":"210b3f94-a7aa-4823-9778-7aede3fc6a45","Type":"ContainerDied","Data":"08f0b03f8b21330cb633a0cd6358143b99ec3a8b266755e1f386904345370043"} Feb 25 12:10:08 crc kubenswrapper[5005]: I0225 12:10:08.932997 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:09 crc kubenswrapper[5005]: I0225 12:10:09.007603 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4z69\" (UniqueName: \"kubernetes.io/projected/210b3f94-a7aa-4823-9778-7aede3fc6a45-kube-api-access-k4z69\") pod \"210b3f94-a7aa-4823-9778-7aede3fc6a45\" (UID: \"210b3f94-a7aa-4823-9778-7aede3fc6a45\") " Feb 25 12:10:09 crc kubenswrapper[5005]: I0225 12:10:09.015511 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210b3f94-a7aa-4823-9778-7aede3fc6a45-kube-api-access-k4z69" (OuterVolumeSpecName: "kube-api-access-k4z69") pod "210b3f94-a7aa-4823-9778-7aede3fc6a45" (UID: "210b3f94-a7aa-4823-9778-7aede3fc6a45"). InnerVolumeSpecName "kube-api-access-k4z69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:10:09 crc kubenswrapper[5005]: I0225 12:10:09.110030 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4z69\" (UniqueName: \"kubernetes.io/projected/210b3f94-a7aa-4823-9778-7aede3fc6a45-kube-api-access-k4z69\") on node \"crc\" DevicePath \"\"" Feb 25 12:10:09 crc kubenswrapper[5005]: I0225 12:10:09.591649 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533690-27vkj" event={"ID":"210b3f94-a7aa-4823-9778-7aede3fc6a45","Type":"ContainerDied","Data":"2c2454638a067b52256636c42ac3d395d93a4503eea5634b646e568b5e764182"} Feb 25 12:10:09 crc kubenswrapper[5005]: I0225 12:10:09.592040 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2454638a067b52256636c42ac3d395d93a4503eea5634b646e568b5e764182" Feb 25 12:10:09 crc kubenswrapper[5005]: I0225 12:10:09.591715 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533690-27vkj" Feb 25 12:10:10 crc kubenswrapper[5005]: I0225 12:10:10.000588 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533684-gzchd"] Feb 25 12:10:10 crc kubenswrapper[5005]: I0225 12:10:10.008992 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533684-gzchd"] Feb 25 12:10:10 crc kubenswrapper[5005]: I0225 12:10:10.702676 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9206cf-f74e-4999-8555-a8d30375cfe0" path="/var/lib/kubelet/pods/be9206cf-f74e-4999-8555-a8d30375cfe0/volumes" Feb 25 12:10:13 crc kubenswrapper[5005]: I0225 12:10:13.238180 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 25 12:10:14 crc kubenswrapper[5005]: I0225 12:10:14.685729 5005 scope.go:117] "RemoveContainer" 
containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:10:14 crc kubenswrapper[5005]: E0225 12:10:14.687239 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:10:14 crc kubenswrapper[5005]: I0225 12:10:14.698690 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"31e745d1-9f40-4deb-adb0-7cb412b3b21f","Type":"ContainerStarted","Data":"f649e48e1a2c0d55a745d90fee17a1a5e5f4b2e66a7e264331485816dce7bd76"} Feb 25 12:10:14 crc kubenswrapper[5005]: I0225 12:10:14.722914 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=3.691128893 podStartE2EDuration="48.72289709s" podCreationTimestamp="2026-02-25 12:09:26 +0000 UTC" firstStartedPulling="2026-02-25 12:09:28.202581716 +0000 UTC m=+3082.243314043" lastFinishedPulling="2026-02-25 12:10:13.234349893 +0000 UTC m=+3127.275082240" observedRunningTime="2026-02-25 12:10:14.713015546 +0000 UTC m=+3128.753747863" watchObservedRunningTime="2026-02-25 12:10:14.72289709 +0000 UTC m=+3128.763629417" Feb 25 12:10:16 crc kubenswrapper[5005]: I0225 12:10:16.456758 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:10:16 crc kubenswrapper[5005]: I0225 12:10:16.528731 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cws7k"] Feb 25 12:10:16 crc kubenswrapper[5005]: I0225 12:10:16.712102 5005 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/redhat-marketplace-cws7k" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="registry-server" containerID="cri-o://7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd" gracePeriod=2 Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.232292 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.399391 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-catalog-content\") pod \"6ced917a-95f4-4ace-981a-fd1b24dc2080\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.399540 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-utilities\") pod \"6ced917a-95f4-4ace-981a-fd1b24dc2080\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.399585 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rch\" (UniqueName: \"kubernetes.io/projected/6ced917a-95f4-4ace-981a-fd1b24dc2080-kube-api-access-88rch\") pod \"6ced917a-95f4-4ace-981a-fd1b24dc2080\" (UID: \"6ced917a-95f4-4ace-981a-fd1b24dc2080\") " Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.400307 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-utilities" (OuterVolumeSpecName: "utilities") pod "6ced917a-95f4-4ace-981a-fd1b24dc2080" (UID: "6ced917a-95f4-4ace-981a-fd1b24dc2080"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.406034 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ced917a-95f4-4ace-981a-fd1b24dc2080-kube-api-access-88rch" (OuterVolumeSpecName: "kube-api-access-88rch") pod "6ced917a-95f4-4ace-981a-fd1b24dc2080" (UID: "6ced917a-95f4-4ace-981a-fd1b24dc2080"). InnerVolumeSpecName "kube-api-access-88rch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.426539 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ced917a-95f4-4ace-981a-fd1b24dc2080" (UID: "6ced917a-95f4-4ace-981a-fd1b24dc2080"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.502330 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.503759 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ced917a-95f4-4ace-981a-fd1b24dc2080-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.503866 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rch\" (UniqueName: \"kubernetes.io/projected/6ced917a-95f4-4ace-981a-fd1b24dc2080-kube-api-access-88rch\") on node \"crc\" DevicePath \"\"" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.722091 5005 generic.go:334] "Generic (PLEG): container finished" podID="6ced917a-95f4-4ace-981a-fd1b24dc2080" 
containerID="7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd" exitCode=0 Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.722291 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cws7k" event={"ID":"6ced917a-95f4-4ace-981a-fd1b24dc2080","Type":"ContainerDied","Data":"7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd"} Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.722547 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cws7k" event={"ID":"6ced917a-95f4-4ace-981a-fd1b24dc2080","Type":"ContainerDied","Data":"21b331c56ff1cada7d344e7d75c526bf3c19ab1a05c26eececf27e428d001ac5"} Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.722359 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cws7k" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.722658 5005 scope.go:117] "RemoveContainer" containerID="7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.748989 5005 scope.go:117] "RemoveContainer" containerID="902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.791041 5005 scope.go:117] "RemoveContainer" containerID="b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.796718 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cws7k"] Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.806175 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cws7k"] Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.838396 5005 scope.go:117] "RemoveContainer" containerID="7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd" Feb 25 
12:10:17 crc kubenswrapper[5005]: E0225 12:10:17.838907 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd\": container with ID starting with 7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd not found: ID does not exist" containerID="7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.838944 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd"} err="failed to get container status \"7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd\": rpc error: code = NotFound desc = could not find container \"7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd\": container with ID starting with 7280909e287faf3186bb6e340dba883d8b1f244999149e98c4ff10ebac6309cd not found: ID does not exist" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.838970 5005 scope.go:117] "RemoveContainer" containerID="902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0" Feb 25 12:10:17 crc kubenswrapper[5005]: E0225 12:10:17.839511 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0\": container with ID starting with 902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0 not found: ID does not exist" containerID="902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.839537 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0"} err="failed to get container status 
\"902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0\": rpc error: code = NotFound desc = could not find container \"902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0\": container with ID starting with 902e0e344ac8f91dfb31bbfdb862d8ca0ba560d05b6d466534f2db947010a5f0 not found: ID does not exist" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.839550 5005 scope.go:117] "RemoveContainer" containerID="b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c" Feb 25 12:10:17 crc kubenswrapper[5005]: E0225 12:10:17.839964 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c\": container with ID starting with b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c not found: ID does not exist" containerID="b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c" Feb 25 12:10:17 crc kubenswrapper[5005]: I0225 12:10:17.840001 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c"} err="failed to get container status \"b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c\": rpc error: code = NotFound desc = could not find container \"b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c\": container with ID starting with b3ca3c55b2b9d898bf6aca90ddb0c30ec0d1b1849759e2cf212cdfd2dcdd915c not found: ID does not exist" Feb 25 12:10:18 crc kubenswrapper[5005]: I0225 12:10:18.725483 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" path="/var/lib/kubelet/pods/6ced917a-95f4-4ace-981a-fd1b24dc2080/volumes" Feb 25 12:10:19 crc kubenswrapper[5005]: I0225 12:10:19.883191 5005 scope.go:117] "RemoveContainer" containerID="2d5bb7409d133c5a5b1fad84b7a7f0e3285edd14105b07455ab1572fd521d948" Feb 25 
12:10:26 crc kubenswrapper[5005]: I0225 12:10:26.692609 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:10:26 crc kubenswrapper[5005]: E0225 12:10:26.693980 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:10:37 crc kubenswrapper[5005]: I0225 12:10:37.685640 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:10:37 crc kubenswrapper[5005]: E0225 12:10:37.686526 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:10:50 crc kubenswrapper[5005]: I0225 12:10:50.686574 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:10:50 crc kubenswrapper[5005]: E0225 12:10:50.687973 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:11:02 crc kubenswrapper[5005]: I0225 12:11:02.689073 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:11:02 crc kubenswrapper[5005]: E0225 12:11:02.690564 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:11:17 crc kubenswrapper[5005]: I0225 12:11:17.685913 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:11:17 crc kubenswrapper[5005]: E0225 12:11:17.686782 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:11:31 crc kubenswrapper[5005]: I0225 12:11:31.686360 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:11:31 crc kubenswrapper[5005]: E0225 12:11:31.687460 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:11:42 crc kubenswrapper[5005]: I0225 12:11:42.685899 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:11:42 crc kubenswrapper[5005]: E0225 12:11:42.687120 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:11:53 crc kubenswrapper[5005]: I0225 12:11:53.686136 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:11:53 crc kubenswrapper[5005]: E0225 12:11:53.687195 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.170109 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533692-tclrm"] Feb 25 12:12:00 crc kubenswrapper[5005]: E0225 12:12:00.171191 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="registry-server" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.171202 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" 
containerName="registry-server" Feb 25 12:12:00 crc kubenswrapper[5005]: E0225 12:12:00.171230 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="extract-content" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.171236 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="extract-content" Feb 25 12:12:00 crc kubenswrapper[5005]: E0225 12:12:00.171262 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="extract-utilities" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.171269 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="extract-utilities" Feb 25 12:12:00 crc kubenswrapper[5005]: E0225 12:12:00.171281 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b3f94-a7aa-4823-9778-7aede3fc6a45" containerName="oc" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.171287 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b3f94-a7aa-4823-9778-7aede3fc6a45" containerName="oc" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.171476 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="210b3f94-a7aa-4823-9778-7aede3fc6a45" containerName="oc" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.171505 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ced917a-95f4-4ace-981a-fd1b24dc2080" containerName="registry-server" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.172119 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.175546 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.175675 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.175752 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.185531 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533692-tclrm"] Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.234492 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vprsr\" (UniqueName: \"kubernetes.io/projected/dd05dd1e-a301-49a1-b196-126a2590e9b6-kube-api-access-vprsr\") pod \"auto-csr-approver-29533692-tclrm\" (UID: \"dd05dd1e-a301-49a1-b196-126a2590e9b6\") " pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.335979 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vprsr\" (UniqueName: \"kubernetes.io/projected/dd05dd1e-a301-49a1-b196-126a2590e9b6-kube-api-access-vprsr\") pod \"auto-csr-approver-29533692-tclrm\" (UID: \"dd05dd1e-a301-49a1-b196-126a2590e9b6\") " pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.355021 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vprsr\" (UniqueName: \"kubernetes.io/projected/dd05dd1e-a301-49a1-b196-126a2590e9b6-kube-api-access-vprsr\") pod \"auto-csr-approver-29533692-tclrm\" (UID: \"dd05dd1e-a301-49a1-b196-126a2590e9b6\") " 
pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:00 crc kubenswrapper[5005]: I0225 12:12:00.495125 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:01 crc kubenswrapper[5005]: I0225 12:12:01.025405 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533692-tclrm"] Feb 25 12:12:01 crc kubenswrapper[5005]: I0225 12:12:01.037617 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:12:01 crc kubenswrapper[5005]: I0225 12:12:01.789067 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533692-tclrm" event={"ID":"dd05dd1e-a301-49a1-b196-126a2590e9b6","Type":"ContainerStarted","Data":"e054d42eff1e6c48ec9b97e3b0a74b04ddba92d829f8046bd6a9a495a057ba8b"} Feb 25 12:12:02 crc kubenswrapper[5005]: I0225 12:12:02.799287 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533692-tclrm" event={"ID":"dd05dd1e-a301-49a1-b196-126a2590e9b6","Type":"ContainerStarted","Data":"386f85388c3906421769e2dd44b63e803304292d730676ba6a47671b80f9361f"} Feb 25 12:12:02 crc kubenswrapper[5005]: I0225 12:12:02.825594 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533692-tclrm" podStartSLOduration=1.703684599 podStartE2EDuration="2.825570768s" podCreationTimestamp="2026-02-25 12:12:00 +0000 UTC" firstStartedPulling="2026-02-25 12:12:01.037440708 +0000 UTC m=+3235.078173035" lastFinishedPulling="2026-02-25 12:12:02.159326867 +0000 UTC m=+3236.200059204" observedRunningTime="2026-02-25 12:12:02.819339356 +0000 UTC m=+3236.860071683" watchObservedRunningTime="2026-02-25 12:12:02.825570768 +0000 UTC m=+3236.866303095" Feb 25 12:12:03 crc kubenswrapper[5005]: I0225 12:12:03.813998 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="dd05dd1e-a301-49a1-b196-126a2590e9b6" containerID="386f85388c3906421769e2dd44b63e803304292d730676ba6a47671b80f9361f" exitCode=0 Feb 25 12:12:03 crc kubenswrapper[5005]: I0225 12:12:03.814818 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533692-tclrm" event={"ID":"dd05dd1e-a301-49a1-b196-126a2590e9b6","Type":"ContainerDied","Data":"386f85388c3906421769e2dd44b63e803304292d730676ba6a47671b80f9361f"} Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.268152 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.375395 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vprsr\" (UniqueName: \"kubernetes.io/projected/dd05dd1e-a301-49a1-b196-126a2590e9b6-kube-api-access-vprsr\") pod \"dd05dd1e-a301-49a1-b196-126a2590e9b6\" (UID: \"dd05dd1e-a301-49a1-b196-126a2590e9b6\") " Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.383809 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd05dd1e-a301-49a1-b196-126a2590e9b6-kube-api-access-vprsr" (OuterVolumeSpecName: "kube-api-access-vprsr") pod "dd05dd1e-a301-49a1-b196-126a2590e9b6" (UID: "dd05dd1e-a301-49a1-b196-126a2590e9b6"). InnerVolumeSpecName "kube-api-access-vprsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.477996 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vprsr\" (UniqueName: \"kubernetes.io/projected/dd05dd1e-a301-49a1-b196-126a2590e9b6-kube-api-access-vprsr\") on node \"crc\" DevicePath \"\"" Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.846109 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533692-tclrm" event={"ID":"dd05dd1e-a301-49a1-b196-126a2590e9b6","Type":"ContainerDied","Data":"e054d42eff1e6c48ec9b97e3b0a74b04ddba92d829f8046bd6a9a495a057ba8b"} Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.846167 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e054d42eff1e6c48ec9b97e3b0a74b04ddba92d829f8046bd6a9a495a057ba8b" Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.846249 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533692-tclrm" Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.926943 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533686-hqqh5"] Feb 25 12:12:05 crc kubenswrapper[5005]: I0225 12:12:05.943830 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533686-hqqh5"] Feb 25 12:12:06 crc kubenswrapper[5005]: I0225 12:12:06.722403 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba2309e-c965-431a-8da7-85e347d172d6" path="/var/lib/kubelet/pods/7ba2309e-c965-431a-8da7-85e347d172d6/volumes" Feb 25 12:12:07 crc kubenswrapper[5005]: I0225 12:12:07.689300 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:12:08 crc kubenswrapper[5005]: I0225 12:12:08.895933 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"1cd290234490006d5dc0f3b60428821ed2b4d093736936da7505bf8063309e11"} Feb 25 12:12:20 crc kubenswrapper[5005]: I0225 12:12:20.002978 5005 scope.go:117] "RemoveContainer" containerID="0ae4c861233e6a4a070c39a52a95481a5ee30f23886940793fd2b383ec40db2c" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.159393 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533694-t5x7x"] Feb 25 12:14:00 crc kubenswrapper[5005]: E0225 12:14:00.160384 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd05dd1e-a301-49a1-b196-126a2590e9b6" containerName="oc" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.160417 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd05dd1e-a301-49a1-b196-126a2590e9b6" containerName="oc" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.160618 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd05dd1e-a301-49a1-b196-126a2590e9b6" containerName="oc" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.161209 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.163444 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.163641 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.164284 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.169250 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533694-t5x7x"] Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.256563 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9s4w\" (UniqueName: \"kubernetes.io/projected/df35c800-a1cf-4cd0-b9c2-08ab1667de3d-kube-api-access-m9s4w\") pod \"auto-csr-approver-29533694-t5x7x\" (UID: \"df35c800-a1cf-4cd0-b9c2-08ab1667de3d\") " pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.282004 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bxnzl"] Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.284043 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.313296 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxnzl"] Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.359114 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr8cd\" (UniqueName: \"kubernetes.io/projected/1bd017fc-bec8-4eed-980f-facd4e791dd9-kube-api-access-jr8cd\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.359173 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-utilities\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.359258 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-catalog-content\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.359312 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9s4w\" (UniqueName: \"kubernetes.io/projected/df35c800-a1cf-4cd0-b9c2-08ab1667de3d-kube-api-access-m9s4w\") pod \"auto-csr-approver-29533694-t5x7x\" (UID: \"df35c800-a1cf-4cd0-b9c2-08ab1667de3d\") " pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.380529 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m9s4w\" (UniqueName: \"kubernetes.io/projected/df35c800-a1cf-4cd0-b9c2-08ab1667de3d-kube-api-access-m9s4w\") pod \"auto-csr-approver-29533694-t5x7x\" (UID: \"df35c800-a1cf-4cd0-b9c2-08ab1667de3d\") " pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.463255 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-catalog-content\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.463430 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr8cd\" (UniqueName: \"kubernetes.io/projected/1bd017fc-bec8-4eed-980f-facd4e791dd9-kube-api-access-jr8cd\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.463448 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-utilities\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.464071 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-utilities\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.464332 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-catalog-content\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.482654 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr8cd\" (UniqueName: \"kubernetes.io/projected/1bd017fc-bec8-4eed-980f-facd4e791dd9-kube-api-access-jr8cd\") pod \"redhat-operators-bxnzl\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.486551 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:00 crc kubenswrapper[5005]: I0225 12:14:00.612103 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:01 crc kubenswrapper[5005]: I0225 12:14:01.022008 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533694-t5x7x"] Feb 25 12:14:01 crc kubenswrapper[5005]: W0225 12:14:01.217721 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bd017fc_bec8_4eed_980f_facd4e791dd9.slice/crio-8b9bf91ab74d234e025a5d92fc8086afa758f82ff3df0f5cae2b6ab5676be952 WatchSource:0}: Error finding container 8b9bf91ab74d234e025a5d92fc8086afa758f82ff3df0f5cae2b6ab5676be952: Status 404 returned error can't find the container with id 8b9bf91ab74d234e025a5d92fc8086afa758f82ff3df0f5cae2b6ab5676be952 Feb 25 12:14:01 crc kubenswrapper[5005]: I0225 12:14:01.218184 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxnzl"] Feb 25 12:14:01 crc kubenswrapper[5005]: I0225 12:14:01.992667 5005 generic.go:334] 
"Generic (PLEG): container finished" podID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerID="8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456" exitCode=0 Feb 25 12:14:01 crc kubenswrapper[5005]: I0225 12:14:01.993428 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerDied","Data":"8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456"} Feb 25 12:14:01 crc kubenswrapper[5005]: I0225 12:14:01.993468 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerStarted","Data":"8b9bf91ab74d234e025a5d92fc8086afa758f82ff3df0f5cae2b6ab5676be952"} Feb 25 12:14:01 crc kubenswrapper[5005]: I0225 12:14:01.994825 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" event={"ID":"df35c800-a1cf-4cd0-b9c2-08ab1667de3d","Type":"ContainerStarted","Data":"949c24d8c443d135ba71f326249550108c1f68cba92254a00495183aa2ca1044"} Feb 25 12:14:03 crc kubenswrapper[5005]: I0225 12:14:03.005018 5005 generic.go:334] "Generic (PLEG): container finished" podID="df35c800-a1cf-4cd0-b9c2-08ab1667de3d" containerID="ee76789ed3e87e2d1c6975a6feb22dcc21e0a79cd47f3e92af1c50a3e61ce530" exitCode=0 Feb 25 12:14:03 crc kubenswrapper[5005]: I0225 12:14:03.005146 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" event={"ID":"df35c800-a1cf-4cd0-b9c2-08ab1667de3d","Type":"ContainerDied","Data":"ee76789ed3e87e2d1c6975a6feb22dcc21e0a79cd47f3e92af1c50a3e61ce530"} Feb 25 12:14:04 crc kubenswrapper[5005]: I0225 12:14:04.016534 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" 
event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerStarted","Data":"ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a"} Feb 25 12:14:04 crc kubenswrapper[5005]: I0225 12:14:04.394351 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:04 crc kubenswrapper[5005]: I0225 12:14:04.450682 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9s4w\" (UniqueName: \"kubernetes.io/projected/df35c800-a1cf-4cd0-b9c2-08ab1667de3d-kube-api-access-m9s4w\") pod \"df35c800-a1cf-4cd0-b9c2-08ab1667de3d\" (UID: \"df35c800-a1cf-4cd0-b9c2-08ab1667de3d\") " Feb 25 12:14:04 crc kubenswrapper[5005]: I0225 12:14:04.456567 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df35c800-a1cf-4cd0-b9c2-08ab1667de3d-kube-api-access-m9s4w" (OuterVolumeSpecName: "kube-api-access-m9s4w") pod "df35c800-a1cf-4cd0-b9c2-08ab1667de3d" (UID: "df35c800-a1cf-4cd0-b9c2-08ab1667de3d"). InnerVolumeSpecName "kube-api-access-m9s4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:14:04 crc kubenswrapper[5005]: I0225 12:14:04.552631 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9s4w\" (UniqueName: \"kubernetes.io/projected/df35c800-a1cf-4cd0-b9c2-08ab1667de3d-kube-api-access-m9s4w\") on node \"crc\" DevicePath \"\"" Feb 25 12:14:05 crc kubenswrapper[5005]: I0225 12:14:05.025959 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" event={"ID":"df35c800-a1cf-4cd0-b9c2-08ab1667de3d","Type":"ContainerDied","Data":"949c24d8c443d135ba71f326249550108c1f68cba92254a00495183aa2ca1044"} Feb 25 12:14:05 crc kubenswrapper[5005]: I0225 12:14:05.026006 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533694-t5x7x" Feb 25 12:14:05 crc kubenswrapper[5005]: I0225 12:14:05.026362 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949c24d8c443d135ba71f326249550108c1f68cba92254a00495183aa2ca1044" Feb 25 12:14:05 crc kubenswrapper[5005]: I0225 12:14:05.471073 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533688-dpwrw"] Feb 25 12:14:05 crc kubenswrapper[5005]: I0225 12:14:05.479424 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533688-dpwrw"] Feb 25 12:14:06 crc kubenswrapper[5005]: I0225 12:14:06.039151 5005 generic.go:334] "Generic (PLEG): container finished" podID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerID="ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a" exitCode=0 Feb 25 12:14:06 crc kubenswrapper[5005]: I0225 12:14:06.039208 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerDied","Data":"ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a"} Feb 25 12:14:06 crc kubenswrapper[5005]: I0225 12:14:06.698721 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21aa569-5f82-4605-930f-6b0b4799779f" path="/var/lib/kubelet/pods/c21aa569-5f82-4605-930f-6b0b4799779f/volumes" Feb 25 12:14:07 crc kubenswrapper[5005]: I0225 12:14:07.048212 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerStarted","Data":"3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22"} Feb 25 12:14:07 crc kubenswrapper[5005]: I0225 12:14:07.101560 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bxnzl" 
podStartSLOduration=2.654813933 podStartE2EDuration="7.101536709s" podCreationTimestamp="2026-02-25 12:14:00 +0000 UTC" firstStartedPulling="2026-02-25 12:14:01.995191076 +0000 UTC m=+3356.035923403" lastFinishedPulling="2026-02-25 12:14:06.441913822 +0000 UTC m=+3360.482646179" observedRunningTime="2026-02-25 12:14:07.092927662 +0000 UTC m=+3361.133659989" watchObservedRunningTime="2026-02-25 12:14:07.101536709 +0000 UTC m=+3361.142269036" Feb 25 12:14:10 crc kubenswrapper[5005]: I0225 12:14:10.612534 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:10 crc kubenswrapper[5005]: I0225 12:14:10.614178 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:11 crc kubenswrapper[5005]: I0225 12:14:11.671870 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bxnzl" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="registry-server" probeResult="failure" output=< Feb 25 12:14:11 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:14:11 crc kubenswrapper[5005]: > Feb 25 12:14:20 crc kubenswrapper[5005]: I0225 12:14:20.118927 5005 scope.go:117] "RemoveContainer" containerID="b1e188f6d7640cc0567e3f857721d68b5b8144d38d1495625bb12f635bdadd4d" Feb 25 12:14:20 crc kubenswrapper[5005]: I0225 12:14:20.671718 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:20 crc kubenswrapper[5005]: I0225 12:14:20.724929 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:20 crc kubenswrapper[5005]: I0225 12:14:20.945739 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bxnzl"] Feb 25 12:14:22 crc kubenswrapper[5005]: 
I0225 12:14:22.210714 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bxnzl" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="registry-server" containerID="cri-o://3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22" gracePeriod=2 Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.715756 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.866059 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr8cd\" (UniqueName: \"kubernetes.io/projected/1bd017fc-bec8-4eed-980f-facd4e791dd9-kube-api-access-jr8cd\") pod \"1bd017fc-bec8-4eed-980f-facd4e791dd9\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.866499 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-catalog-content\") pod \"1bd017fc-bec8-4eed-980f-facd4e791dd9\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.866786 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-utilities\") pod \"1bd017fc-bec8-4eed-980f-facd4e791dd9\" (UID: \"1bd017fc-bec8-4eed-980f-facd4e791dd9\") " Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.871450 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-utilities" (OuterVolumeSpecName: "utilities") pod "1bd017fc-bec8-4eed-980f-facd4e791dd9" (UID: "1bd017fc-bec8-4eed-980f-facd4e791dd9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.888250 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd017fc-bec8-4eed-980f-facd4e791dd9-kube-api-access-jr8cd" (OuterVolumeSpecName: "kube-api-access-jr8cd") pod "1bd017fc-bec8-4eed-980f-facd4e791dd9" (UID: "1bd017fc-bec8-4eed-980f-facd4e791dd9"). InnerVolumeSpecName "kube-api-access-jr8cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.970628 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:14:22 crc kubenswrapper[5005]: I0225 12:14:22.970661 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr8cd\" (UniqueName: \"kubernetes.io/projected/1bd017fc-bec8-4eed-980f-facd4e791dd9-kube-api-access-jr8cd\") on node \"crc\" DevicePath \"\"" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.016518 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd017fc-bec8-4eed-980f-facd4e791dd9" (UID: "1bd017fc-bec8-4eed-980f-facd4e791dd9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.073109 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd017fc-bec8-4eed-980f-facd4e791dd9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.220685 5005 generic.go:334] "Generic (PLEG): container finished" podID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerID="3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22" exitCode=0 Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.220766 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxnzl" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.220775 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerDied","Data":"3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22"} Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.222178 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxnzl" event={"ID":"1bd017fc-bec8-4eed-980f-facd4e791dd9","Type":"ContainerDied","Data":"8b9bf91ab74d234e025a5d92fc8086afa758f82ff3df0f5cae2b6ab5676be952"} Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.222240 5005 scope.go:117] "RemoveContainer" containerID="3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.261203 5005 scope.go:117] "RemoveContainer" containerID="ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.265074 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bxnzl"] Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 
12:14:23.276896 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bxnzl"] Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.284711 5005 scope.go:117] "RemoveContainer" containerID="8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.332776 5005 scope.go:117] "RemoveContainer" containerID="3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22" Feb 25 12:14:23 crc kubenswrapper[5005]: E0225 12:14:23.333633 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22\": container with ID starting with 3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22 not found: ID does not exist" containerID="3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.333688 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22"} err="failed to get container status \"3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22\": rpc error: code = NotFound desc = could not find container \"3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22\": container with ID starting with 3c4d4f290183da83e401a4748e552e920940cc32a7e8b53e77002fd600d58e22 not found: ID does not exist" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.333719 5005 scope.go:117] "RemoveContainer" containerID="ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a" Feb 25 12:14:23 crc kubenswrapper[5005]: E0225 12:14:23.334063 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a\": container with ID 
starting with ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a not found: ID does not exist" containerID="ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.334170 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a"} err="failed to get container status \"ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a\": rpc error: code = NotFound desc = could not find container \"ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a\": container with ID starting with ffa13600dfd0829bbe6e6189591edf5e9f071b8a37788b20fe43fc9ecf5f078a not found: ID does not exist" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.334250 5005 scope.go:117] "RemoveContainer" containerID="8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456" Feb 25 12:14:23 crc kubenswrapper[5005]: E0225 12:14:23.335410 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456\": container with ID starting with 8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456 not found: ID does not exist" containerID="8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456" Feb 25 12:14:23 crc kubenswrapper[5005]: I0225 12:14:23.335448 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456"} err="failed to get container status \"8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456\": rpc error: code = NotFound desc = could not find container \"8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456\": container with ID starting with 8d84da01d5c61b255ffa8154eff581a5e39fb4843e714ada465ae5d83f60e456 not found: 
ID does not exist" Feb 25 12:14:24 crc kubenswrapper[5005]: I0225 12:14:24.697449 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" path="/var/lib/kubelet/pods/1bd017fc-bec8-4eed-980f-facd4e791dd9/volumes" Feb 25 12:14:28 crc kubenswrapper[5005]: I0225 12:14:28.088059 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:14:28 crc kubenswrapper[5005]: I0225 12:14:28.088761 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:14:58 crc kubenswrapper[5005]: I0225 12:14:58.087292 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:14:58 crc kubenswrapper[5005]: I0225 12:14:58.090028 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.151291 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6"] Feb 25 12:15:00 crc kubenswrapper[5005]: 
E0225 12:15:00.152197 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df35c800-a1cf-4cd0-b9c2-08ab1667de3d" containerName="oc" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.152215 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="df35c800-a1cf-4cd0-b9c2-08ab1667de3d" containerName="oc" Feb 25 12:15:00 crc kubenswrapper[5005]: E0225 12:15:00.152240 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="registry-server" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.152265 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="registry-server" Feb 25 12:15:00 crc kubenswrapper[5005]: E0225 12:15:00.152294 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="extract-content" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.152302 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="extract-content" Feb 25 12:15:00 crc kubenswrapper[5005]: E0225 12:15:00.152309 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="extract-utilities" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.152316 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="extract-utilities" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.152529 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd017fc-bec8-4eed-980f-facd4e791dd9" containerName="registry-server" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.152543 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="df35c800-a1cf-4cd0-b9c2-08ab1667de3d" containerName="oc" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.153256 5005 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.155274 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.155852 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.160316 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6"] Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.244606 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldg7l\" (UniqueName: \"kubernetes.io/projected/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-kube-api-access-ldg7l\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.244719 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-config-volume\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.244941 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-secret-volume\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.346934 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldg7l\" (UniqueName: \"kubernetes.io/projected/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-kube-api-access-ldg7l\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.346996 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-config-volume\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.347134 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-secret-volume\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.348448 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-config-volume\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.355184 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-secret-volume\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.369342 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldg7l\" (UniqueName: \"kubernetes.io/projected/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-kube-api-access-ldg7l\") pod \"collect-profiles-29533695-2twb6\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.481460 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:00 crc kubenswrapper[5005]: I0225 12:15:00.975131 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6"] Feb 25 12:15:01 crc kubenswrapper[5005]: I0225 12:15:01.621768 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" event={"ID":"be6abeca-11d0-4b7f-aa7f-2ee814906b8d","Type":"ContainerStarted","Data":"3abd91c50d1623e904661b1da012eaef616db0e3f72d77d482baa472994225fb"} Feb 25 12:15:01 crc kubenswrapper[5005]: I0225 12:15:01.622152 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" event={"ID":"be6abeca-11d0-4b7f-aa7f-2ee814906b8d","Type":"ContainerStarted","Data":"65b279ce9aeb3cfa579da05af1fce696ecd5954061eca643b1b304c7694616eb"} Feb 25 12:15:01 crc kubenswrapper[5005]: I0225 12:15:01.648292 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" 
podStartSLOduration=1.6482723049999999 podStartE2EDuration="1.648272305s" podCreationTimestamp="2026-02-25 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 12:15:01.645286376 +0000 UTC m=+3415.686018703" watchObservedRunningTime="2026-02-25 12:15:01.648272305 +0000 UTC m=+3415.689004632" Feb 25 12:15:03 crc kubenswrapper[5005]: I0225 12:15:03.640301 5005 generic.go:334] "Generic (PLEG): container finished" podID="be6abeca-11d0-4b7f-aa7f-2ee814906b8d" containerID="3abd91c50d1623e904661b1da012eaef616db0e3f72d77d482baa472994225fb" exitCode=0 Feb 25 12:15:03 crc kubenswrapper[5005]: I0225 12:15:03.640397 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" event={"ID":"be6abeca-11d0-4b7f-aa7f-2ee814906b8d","Type":"ContainerDied","Data":"3abd91c50d1623e904661b1da012eaef616db0e3f72d77d482baa472994225fb"} Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.035198 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.154285 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-secret-volume\") pod \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.154535 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldg7l\" (UniqueName: \"kubernetes.io/projected/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-kube-api-access-ldg7l\") pod \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.154644 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-config-volume\") pod \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\" (UID: \"be6abeca-11d0-4b7f-aa7f-2ee814906b8d\") " Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.155536 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be6abeca-11d0-4b7f-aa7f-2ee814906b8d" (UID: "be6abeca-11d0-4b7f-aa7f-2ee814906b8d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.164309 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-kube-api-access-ldg7l" (OuterVolumeSpecName: "kube-api-access-ldg7l") pod "be6abeca-11d0-4b7f-aa7f-2ee814906b8d" (UID: "be6abeca-11d0-4b7f-aa7f-2ee814906b8d"). 
InnerVolumeSpecName "kube-api-access-ldg7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.165981 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "be6abeca-11d0-4b7f-aa7f-2ee814906b8d" (UID: "be6abeca-11d0-4b7f-aa7f-2ee814906b8d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.257800 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.257837 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.257847 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldg7l\" (UniqueName: \"kubernetes.io/projected/be6abeca-11d0-4b7f-aa7f-2ee814906b8d-kube-api-access-ldg7l\") on node \"crc\" DevicePath \"\"" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.661318 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" event={"ID":"be6abeca-11d0-4b7f-aa7f-2ee814906b8d","Type":"ContainerDied","Data":"65b279ce9aeb3cfa579da05af1fce696ecd5954061eca643b1b304c7694616eb"} Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.661622 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b279ce9aeb3cfa579da05af1fce696ecd5954061eca643b1b304c7694616eb" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.661481 5005 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6" Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.771248 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n"] Feb 25 12:15:05 crc kubenswrapper[5005]: I0225 12:15:05.784276 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533650-4n78n"] Feb 25 12:15:06 crc kubenswrapper[5005]: I0225 12:15:06.698185 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72ffb45-73a1-482a-8544-24afc078bacf" path="/var/lib/kubelet/pods/d72ffb45-73a1-482a-8544-24afc078bacf/volumes" Feb 25 12:15:20 crc kubenswrapper[5005]: I0225 12:15:20.192697 5005 scope.go:117] "RemoveContainer" containerID="0dd0599eae82e590a3588421ff6b9b4947e5f8b24397ad51c51b1cfb39a2ad29" Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.087056 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.087884 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.087949 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.088907 5005 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cd290234490006d5dc0f3b60428821ed2b4d093736936da7505bf8063309e11"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.088977 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://1cd290234490006d5dc0f3b60428821ed2b4d093736936da7505bf8063309e11" gracePeriod=600 Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.908822 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="1cd290234490006d5dc0f3b60428821ed2b4d093736936da7505bf8063309e11" exitCode=0 Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.908997 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"1cd290234490006d5dc0f3b60428821ed2b4d093736936da7505bf8063309e11"} Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.909691 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494"} Feb 25 12:15:28 crc kubenswrapper[5005]: I0225 12:15:28.909726 5005 scope.go:117] "RemoveContainer" containerID="7ca266498d6b9c8fb2a14f176fd0f40e3e96606757679e079fcb7ecb4ca85b52" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.148732 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29533696-vrhl2"] Feb 25 12:16:00 crc kubenswrapper[5005]: E0225 12:16:00.149906 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6abeca-11d0-4b7f-aa7f-2ee814906b8d" containerName="collect-profiles" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.149920 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6abeca-11d0-4b7f-aa7f-2ee814906b8d" containerName="collect-profiles" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.150125 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6abeca-11d0-4b7f-aa7f-2ee814906b8d" containerName="collect-profiles" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.151656 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.153892 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.154212 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.154625 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.158319 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533696-vrhl2"] Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.254197 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5n97\" (UniqueName: \"kubernetes.io/projected/0d470f4d-6787-4784-8a87-bb3f71090ad1-kube-api-access-p5n97\") pod \"auto-csr-approver-29533696-vrhl2\" (UID: \"0d470f4d-6787-4784-8a87-bb3f71090ad1\") " pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 
25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.356347 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5n97\" (UniqueName: \"kubernetes.io/projected/0d470f4d-6787-4784-8a87-bb3f71090ad1-kube-api-access-p5n97\") pod \"auto-csr-approver-29533696-vrhl2\" (UID: \"0d470f4d-6787-4784-8a87-bb3f71090ad1\") " pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.376716 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5n97\" (UniqueName: \"kubernetes.io/projected/0d470f4d-6787-4784-8a87-bb3f71090ad1-kube-api-access-p5n97\") pod \"auto-csr-approver-29533696-vrhl2\" (UID: \"0d470f4d-6787-4784-8a87-bb3f71090ad1\") " pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 25 12:16:00 crc kubenswrapper[5005]: I0225 12:16:00.472785 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 25 12:16:01 crc kubenswrapper[5005]: I0225 12:16:01.067658 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533696-vrhl2"] Feb 25 12:16:01 crc kubenswrapper[5005]: I0225 12:16:01.208941 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" event={"ID":"0d470f4d-6787-4784-8a87-bb3f71090ad1","Type":"ContainerStarted","Data":"7985b8fa5a6439438bcc2d3be6c8bddcd772459fd7ea5b9bef66ad41108c8bb6"} Feb 25 12:16:03 crc kubenswrapper[5005]: I0225 12:16:03.237578 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" event={"ID":"0d470f4d-6787-4784-8a87-bb3f71090ad1","Type":"ContainerStarted","Data":"82b17989f4115fca9f65c18f0999bdf5f5a39e44e257a3ef1d617d2d388a29df"} Feb 25 12:16:03 crc kubenswrapper[5005]: I0225 12:16:03.273218 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29533696-vrhl2" podStartSLOduration=1.711706186 podStartE2EDuration="3.273186859s" podCreationTimestamp="2026-02-25 12:16:00 +0000 UTC" firstStartedPulling="2026-02-25 12:16:01.065455047 +0000 UTC m=+3475.106187384" lastFinishedPulling="2026-02-25 12:16:02.62693573 +0000 UTC m=+3476.667668057" observedRunningTime="2026-02-25 12:16:03.255883862 +0000 UTC m=+3477.296616209" watchObservedRunningTime="2026-02-25 12:16:03.273186859 +0000 UTC m=+3477.313919186" Feb 25 12:16:04 crc kubenswrapper[5005]: I0225 12:16:04.522680 5005 generic.go:334] "Generic (PLEG): container finished" podID="0d470f4d-6787-4784-8a87-bb3f71090ad1" containerID="82b17989f4115fca9f65c18f0999bdf5f5a39e44e257a3ef1d617d2d388a29df" exitCode=0 Feb 25 12:16:04 crc kubenswrapper[5005]: I0225 12:16:04.523193 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" event={"ID":"0d470f4d-6787-4784-8a87-bb3f71090ad1","Type":"ContainerDied","Data":"82b17989f4115fca9f65c18f0999bdf5f5a39e44e257a3ef1d617d2d388a29df"} Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.060187 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.177647 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5n97\" (UniqueName: \"kubernetes.io/projected/0d470f4d-6787-4784-8a87-bb3f71090ad1-kube-api-access-p5n97\") pod \"0d470f4d-6787-4784-8a87-bb3f71090ad1\" (UID: \"0d470f4d-6787-4784-8a87-bb3f71090ad1\") " Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.185366 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d470f4d-6787-4784-8a87-bb3f71090ad1-kube-api-access-p5n97" (OuterVolumeSpecName: "kube-api-access-p5n97") pod "0d470f4d-6787-4784-8a87-bb3f71090ad1" (UID: "0d470f4d-6787-4784-8a87-bb3f71090ad1"). InnerVolumeSpecName "kube-api-access-p5n97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.280956 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5n97\" (UniqueName: \"kubernetes.io/projected/0d470f4d-6787-4784-8a87-bb3f71090ad1-kube-api-access-p5n97\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.525476 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533690-27vkj"] Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.542265 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533690-27vkj"] Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.548160 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" event={"ID":"0d470f4d-6787-4784-8a87-bb3f71090ad1","Type":"ContainerDied","Data":"7985b8fa5a6439438bcc2d3be6c8bddcd772459fd7ea5b9bef66ad41108c8bb6"} Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.548217 5005 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="7985b8fa5a6439438bcc2d3be6c8bddcd772459fd7ea5b9bef66ad41108c8bb6" Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.548255 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533696-vrhl2" Feb 25 12:16:06 crc kubenswrapper[5005]: I0225 12:16:06.702748 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210b3f94-a7aa-4823-9778-7aede3fc6a45" path="/var/lib/kubelet/pods/210b3f94-a7aa-4823-9778-7aede3fc6a45/volumes" Feb 25 12:16:20 crc kubenswrapper[5005]: I0225 12:16:20.292762 5005 scope.go:117] "RemoveContainer" containerID="08f0b03f8b21330cb633a0cd6358143b99ec3a8b266755e1f386904345370043" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.598998 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2mgd"] Feb 25 12:16:29 crc kubenswrapper[5005]: E0225 12:16:29.600090 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d470f4d-6787-4784-8a87-bb3f71090ad1" containerName="oc" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.600106 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d470f4d-6787-4784-8a87-bb3f71090ad1" containerName="oc" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.600390 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d470f4d-6787-4784-8a87-bb3f71090ad1" containerName="oc" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.601943 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.619002 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2mgd"] Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.788630 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-utilities\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.788917 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-catalog-content\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.788943 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp5wj\" (UniqueName: \"kubernetes.io/projected/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-kube-api-access-hp5wj\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.891291 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-utilities\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.891401 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-catalog-content\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.891422 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp5wj\" (UniqueName: \"kubernetes.io/projected/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-kube-api-access-hp5wj\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.892490 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-utilities\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.892765 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-catalog-content\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.917781 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp5wj\" (UniqueName: \"kubernetes.io/projected/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-kube-api-access-hp5wj\") pod \"community-operators-s2mgd\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:29 crc kubenswrapper[5005]: I0225 12:16:29.933970 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:30 crc kubenswrapper[5005]: I0225 12:16:30.602179 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2mgd"] Feb 25 12:16:31 crc kubenswrapper[5005]: I0225 12:16:31.052743 5005 generic.go:334] "Generic (PLEG): container finished" podID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerID="b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec" exitCode=0 Feb 25 12:16:31 crc kubenswrapper[5005]: I0225 12:16:31.052876 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerDied","Data":"b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec"} Feb 25 12:16:31 crc kubenswrapper[5005]: I0225 12:16:31.053279 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerStarted","Data":"ffb64ff4bfa3eb59ef2b6e7580ababf1a205fdc39107a96a387df4a96f494e99"} Feb 25 12:16:33 crc kubenswrapper[5005]: I0225 12:16:33.081194 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerStarted","Data":"1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac"} Feb 25 12:16:37 crc kubenswrapper[5005]: I0225 12:16:37.131674 5005 generic.go:334] "Generic (PLEG): container finished" podID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerID="1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac" exitCode=0 Feb 25 12:16:37 crc kubenswrapper[5005]: I0225 12:16:37.131767 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" 
event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerDied","Data":"1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac"} Feb 25 12:16:38 crc kubenswrapper[5005]: I0225 12:16:38.144877 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerStarted","Data":"ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5"} Feb 25 12:16:38 crc kubenswrapper[5005]: I0225 12:16:38.193859 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2mgd" podStartSLOduration=2.6508912860000002 podStartE2EDuration="9.193822766s" podCreationTimestamp="2026-02-25 12:16:29 +0000 UTC" firstStartedPulling="2026-02-25 12:16:31.05502661 +0000 UTC m=+3505.095758947" lastFinishedPulling="2026-02-25 12:16:37.59795811 +0000 UTC m=+3511.638690427" observedRunningTime="2026-02-25 12:16:38.186893299 +0000 UTC m=+3512.227625636" watchObservedRunningTime="2026-02-25 12:16:38.193822766 +0000 UTC m=+3512.234555133" Feb 25 12:16:39 crc kubenswrapper[5005]: I0225 12:16:39.934314 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:39 crc kubenswrapper[5005]: I0225 12:16:39.934368 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:40 crc kubenswrapper[5005]: I0225 12:16:40.984425 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s2mgd" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="registry-server" probeResult="failure" output=< Feb 25 12:16:40 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:16:40 crc kubenswrapper[5005]: > Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.114559 5005 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-877xt"] Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.117998 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.139105 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-877xt"] Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.230296 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-catalog-content\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.230660 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc5m\" (UniqueName: \"kubernetes.io/projected/3207b725-cd93-4243-8072-23076c45becd-kube-api-access-5qc5m\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.230947 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-utilities\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.334317 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-catalog-content\") pod \"certified-operators-877xt\" (UID: 
\"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.334845 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-catalog-content\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.334945 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qc5m\" (UniqueName: \"kubernetes.io/projected/3207b725-cd93-4243-8072-23076c45becd-kube-api-access-5qc5m\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.335014 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-utilities\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.335522 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-utilities\") pod \"certified-operators-877xt\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.358519 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qc5m\" (UniqueName: \"kubernetes.io/projected/3207b725-cd93-4243-8072-23076c45becd-kube-api-access-5qc5m\") pod \"certified-operators-877xt\" (UID: 
\"3207b725-cd93-4243-8072-23076c45becd\") " pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:43 crc kubenswrapper[5005]: I0225 12:16:43.475197 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:44 crc kubenswrapper[5005]: I0225 12:16:44.095209 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-877xt"] Feb 25 12:16:44 crc kubenswrapper[5005]: I0225 12:16:44.262451 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerStarted","Data":"6b70e8725445c88633fc2258b519790f58b69dd61ec1afcff31b27cbbe1dd732"} Feb 25 12:16:45 crc kubenswrapper[5005]: I0225 12:16:45.275035 5005 generic.go:334] "Generic (PLEG): container finished" podID="3207b725-cd93-4243-8072-23076c45becd" containerID="4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559" exitCode=0 Feb 25 12:16:45 crc kubenswrapper[5005]: I0225 12:16:45.275344 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerDied","Data":"4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559"} Feb 25 12:16:46 crc kubenswrapper[5005]: I0225 12:16:46.286696 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerStarted","Data":"42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4"} Feb 25 12:16:47 crc kubenswrapper[5005]: I0225 12:16:47.302091 5005 generic.go:334] "Generic (PLEG): container finished" podID="3207b725-cd93-4243-8072-23076c45becd" containerID="42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4" exitCode=0 Feb 25 12:16:47 crc kubenswrapper[5005]: I0225 
12:16:47.302238 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerDied","Data":"42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4"} Feb 25 12:16:48 crc kubenswrapper[5005]: I0225 12:16:48.318050 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerStarted","Data":"d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824"} Feb 25 12:16:48 crc kubenswrapper[5005]: I0225 12:16:48.339319 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-877xt" podStartSLOduration=2.772986428 podStartE2EDuration="5.339299057s" podCreationTimestamp="2026-02-25 12:16:43 +0000 UTC" firstStartedPulling="2026-02-25 12:16:45.278207938 +0000 UTC m=+3519.318940265" lastFinishedPulling="2026-02-25 12:16:47.844520577 +0000 UTC m=+3521.885252894" observedRunningTime="2026-02-25 12:16:48.337255586 +0000 UTC m=+3522.377987933" watchObservedRunningTime="2026-02-25 12:16:48.339299057 +0000 UTC m=+3522.380031394" Feb 25 12:16:49 crc kubenswrapper[5005]: I0225 12:16:49.992806 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:50 crc kubenswrapper[5005]: I0225 12:16:50.048234 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:52 crc kubenswrapper[5005]: I0225 12:16:52.488309 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2mgd"] Feb 25 12:16:52 crc kubenswrapper[5005]: I0225 12:16:52.488879 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s2mgd" 
podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="registry-server" containerID="cri-o://ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5" gracePeriod=2 Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.059987 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.169230 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp5wj\" (UniqueName: \"kubernetes.io/projected/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-kube-api-access-hp5wj\") pod \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.169551 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-utilities\") pod \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.169586 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-catalog-content\") pod \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\" (UID: \"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b\") " Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.170146 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-utilities" (OuterVolumeSpecName: "utilities") pod "8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" (UID: "8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.176645 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-kube-api-access-hp5wj" (OuterVolumeSpecName: "kube-api-access-hp5wj") pod "8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" (UID: "8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b"). InnerVolumeSpecName "kube-api-access-hp5wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.191113 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.191265 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp5wj\" (UniqueName: \"kubernetes.io/projected/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-kube-api-access-hp5wj\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.240064 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" (UID: "8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.293707 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.381711 5005 generic.go:334] "Generic (PLEG): container finished" podID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerID="ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5" exitCode=0 Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.381790 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerDied","Data":"ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5"} Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.381828 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2mgd" event={"ID":"8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b","Type":"ContainerDied","Data":"ffb64ff4bfa3eb59ef2b6e7580ababf1a205fdc39107a96a387df4a96f494e99"} Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.381852 5005 scope.go:117] "RemoveContainer" containerID="ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.382095 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2mgd" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.415613 5005 scope.go:117] "RemoveContainer" containerID="1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.426024 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2mgd"] Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.435504 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2mgd"] Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.470267 5005 scope.go:117] "RemoveContainer" containerID="b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.475945 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.476595 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.513911 5005 scope.go:117] "RemoveContainer" containerID="ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5" Feb 25 12:16:53 crc kubenswrapper[5005]: E0225 12:16:53.514484 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5\": container with ID starting with ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5 not found: ID does not exist" containerID="ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.514521 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5"} err="failed to get container status \"ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5\": rpc error: code = NotFound desc = could not find container \"ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5\": container with ID starting with ce886e1180e858a8040be3b384ba252e8d6f5ab6ffda6c1cdae27f5f8ff62fc5 not found: ID does not exist" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.514552 5005 scope.go:117] "RemoveContainer" containerID="1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac" Feb 25 12:16:53 crc kubenswrapper[5005]: E0225 12:16:53.514796 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac\": container with ID starting with 1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac not found: ID does not exist" containerID="1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.514819 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac"} err="failed to get container status \"1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac\": rpc error: code = NotFound desc = could not find container \"1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac\": container with ID starting with 1e25b6a4888ef4581c2b35b51c547a34855949fece66779474a3e36eb5343eac not found: ID does not exist" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.514838 5005 scope.go:117] "RemoveContainer" containerID="b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec" Feb 25 12:16:53 crc kubenswrapper[5005]: E0225 12:16:53.515131 5005 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec\": container with ID starting with b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec not found: ID does not exist" containerID="b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.515151 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec"} err="failed to get container status \"b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec\": rpc error: code = NotFound desc = could not find container \"b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec\": container with ID starting with b6d64d83de6a7f335cc03cb5af0c4f32945aaeea4e1f68308bda9beb1fc593ec not found: ID does not exist" Feb 25 12:16:53 crc kubenswrapper[5005]: I0225 12:16:53.547569 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:54 crc kubenswrapper[5005]: I0225 12:16:54.442979 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:54 crc kubenswrapper[5005]: I0225 12:16:54.700501 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" path="/var/lib/kubelet/pods/8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b/volumes" Feb 25 12:16:57 crc kubenswrapper[5005]: I0225 12:16:57.285709 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-877xt"] Feb 25 12:16:57 crc kubenswrapper[5005]: I0225 12:16:57.420756 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-877xt" podUID="3207b725-cd93-4243-8072-23076c45becd" 
containerName="registry-server" containerID="cri-o://d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824" gracePeriod=2 Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.036831 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.202179 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qc5m\" (UniqueName: \"kubernetes.io/projected/3207b725-cd93-4243-8072-23076c45becd-kube-api-access-5qc5m\") pod \"3207b725-cd93-4243-8072-23076c45becd\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.202266 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-utilities\") pod \"3207b725-cd93-4243-8072-23076c45becd\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.202349 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-catalog-content\") pod \"3207b725-cd93-4243-8072-23076c45becd\" (UID: \"3207b725-cd93-4243-8072-23076c45becd\") " Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.203692 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-utilities" (OuterVolumeSpecName: "utilities") pod "3207b725-cd93-4243-8072-23076c45becd" (UID: "3207b725-cd93-4243-8072-23076c45becd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.213850 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3207b725-cd93-4243-8072-23076c45becd-kube-api-access-5qc5m" (OuterVolumeSpecName: "kube-api-access-5qc5m") pod "3207b725-cd93-4243-8072-23076c45becd" (UID: "3207b725-cd93-4243-8072-23076c45becd"). InnerVolumeSpecName "kube-api-access-5qc5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.260729 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3207b725-cd93-4243-8072-23076c45becd" (UID: "3207b725-cd93-4243-8072-23076c45becd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.305705 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qc5m\" (UniqueName: \"kubernetes.io/projected/3207b725-cd93-4243-8072-23076c45becd-kube-api-access-5qc5m\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.305771 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.305794 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3207b725-cd93-4243-8072-23076c45becd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.438733 5005 generic.go:334] "Generic (PLEG): container finished" podID="3207b725-cd93-4243-8072-23076c45becd" 
containerID="d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824" exitCode=0 Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.438789 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerDied","Data":"d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824"} Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.438824 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-877xt" event={"ID":"3207b725-cd93-4243-8072-23076c45becd","Type":"ContainerDied","Data":"6b70e8725445c88633fc2258b519790f58b69dd61ec1afcff31b27cbbe1dd732"} Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.438846 5005 scope.go:117] "RemoveContainer" containerID="d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.439028 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-877xt" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.479328 5005 scope.go:117] "RemoveContainer" containerID="42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.498212 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-877xt"] Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.508691 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-877xt"] Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.521400 5005 scope.go:117] "RemoveContainer" containerID="4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.571396 5005 scope.go:117] "RemoveContainer" containerID="d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824" Feb 25 12:16:58 crc kubenswrapper[5005]: E0225 12:16:58.572213 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824\": container with ID starting with d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824 not found: ID does not exist" containerID="d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.572279 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824"} err="failed to get container status \"d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824\": rpc error: code = NotFound desc = could not find container \"d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824\": container with ID starting with d73ec98854037fa13c740b736fc28170b3f6ad8ff3b6bc089145f5cceb680824 not 
found: ID does not exist" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.572322 5005 scope.go:117] "RemoveContainer" containerID="42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4" Feb 25 12:16:58 crc kubenswrapper[5005]: E0225 12:16:58.572671 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4\": container with ID starting with 42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4 not found: ID does not exist" containerID="42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.572715 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4"} err="failed to get container status \"42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4\": rpc error: code = NotFound desc = could not find container \"42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4\": container with ID starting with 42a859051083c556fbaa59d752e901ead0b426f7c95a7d5c605406112c526df4 not found: ID does not exist" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.572745 5005 scope.go:117] "RemoveContainer" containerID="4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559" Feb 25 12:16:58 crc kubenswrapper[5005]: E0225 12:16:58.573178 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559\": container with ID starting with 4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559 not found: ID does not exist" containerID="4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.573199 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559"} err="failed to get container status \"4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559\": rpc error: code = NotFound desc = could not find container \"4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559\": container with ID starting with 4e06528499bb94234ae017cb4fa5d7d53450e620026b00ed3080698635b35559 not found: ID does not exist" Feb 25 12:16:58 crc kubenswrapper[5005]: I0225 12:16:58.699599 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3207b725-cd93-4243-8072-23076c45becd" path="/var/lib/kubelet/pods/3207b725-cd93-4243-8072-23076c45becd/volumes" Feb 25 12:17:12 crc kubenswrapper[5005]: I0225 12:17:12.047454 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-bfmds"] Feb 25 12:17:12 crc kubenswrapper[5005]: I0225 12:17:12.060496 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-bfmds"] Feb 25 12:17:12 crc kubenswrapper[5005]: I0225 12:17:12.700934 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911d1b58-a288-457b-9acb-206003bb9c0b" path="/var/lib/kubelet/pods/911d1b58-a288-457b-9acb-206003bb9c0b/volumes" Feb 25 12:17:13 crc kubenswrapper[5005]: I0225 12:17:13.044400 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-6303-account-create-update-wkvmp"] Feb 25 12:17:13 crc kubenswrapper[5005]: I0225 12:17:13.053652 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-6303-account-create-update-wkvmp"] Feb 25 12:17:14 crc kubenswrapper[5005]: I0225 12:17:14.698997 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d08653-4d72-4043-8f2d-aafdd7f7c384" path="/var/lib/kubelet/pods/04d08653-4d72-4043-8f2d-aafdd7f7c384/volumes" Feb 25 12:17:20 crc kubenswrapper[5005]: I0225 
12:17:20.389458 5005 scope.go:117] "RemoveContainer" containerID="fc5625cc13af47722041412700840f0f429fa480fe71d97134c77757a36af3ca" Feb 25 12:17:20 crc kubenswrapper[5005]: I0225 12:17:20.431776 5005 scope.go:117] "RemoveContainer" containerID="557a41332180d194dbca1e9e4a3e4959368221f72c243411551413407d8cd7ce" Feb 25 12:17:28 crc kubenswrapper[5005]: I0225 12:17:28.087201 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:17:28 crc kubenswrapper[5005]: I0225 12:17:28.087863 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:17:33 crc kubenswrapper[5005]: I0225 12:17:33.056311 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-5pz6m"] Feb 25 12:17:33 crc kubenswrapper[5005]: I0225 12:17:33.069154 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-5pz6m"] Feb 25 12:17:34 crc kubenswrapper[5005]: I0225 12:17:34.702914 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0" path="/var/lib/kubelet/pods/7e95d02c-8ff5-4c4d-a8a9-c002e69afdd0/volumes" Feb 25 12:17:58 crc kubenswrapper[5005]: I0225 12:17:58.087905 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:17:58 crc 
kubenswrapper[5005]: I0225 12:17:58.088821 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.142775 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533698-px9xp"] Feb 25 12:18:00 crc kubenswrapper[5005]: E0225 12:18:00.143697 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="registry-server" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.143716 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="registry-server" Feb 25 12:18:00 crc kubenswrapper[5005]: E0225 12:18:00.143737 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="extract-utilities" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.143745 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="extract-utilities" Feb 25 12:18:00 crc kubenswrapper[5005]: E0225 12:18:00.143756 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="extract-content" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.143765 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="extract-content" Feb 25 12:18:00 crc kubenswrapper[5005]: E0225 12:18:00.143778 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="extract-content" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 
12:18:00.143785 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="extract-content" Feb 25 12:18:00 crc kubenswrapper[5005]: E0225 12:18:00.143797 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="extract-utilities" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.143804 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="extract-utilities" Feb 25 12:18:00 crc kubenswrapper[5005]: E0225 12:18:00.143836 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="registry-server" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.143842 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="registry-server" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.144010 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2e2f1f-9404-46b9-8ea3-96f6e6ec3c1b" containerName="registry-server" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.144022 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3207b725-cd93-4243-8072-23076c45becd" containerName="registry-server" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.144722 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.146844 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.147164 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.148269 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.162681 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533698-px9xp"] Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.248764 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22x9s\" (UniqueName: \"kubernetes.io/projected/db977573-14f3-461f-96ff-a0c6b05526ba-kube-api-access-22x9s\") pod \"auto-csr-approver-29533698-px9xp\" (UID: \"db977573-14f3-461f-96ff-a0c6b05526ba\") " pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.350911 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22x9s\" (UniqueName: \"kubernetes.io/projected/db977573-14f3-461f-96ff-a0c6b05526ba-kube-api-access-22x9s\") pod \"auto-csr-approver-29533698-px9xp\" (UID: \"db977573-14f3-461f-96ff-a0c6b05526ba\") " pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.372639 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22x9s\" (UniqueName: \"kubernetes.io/projected/db977573-14f3-461f-96ff-a0c6b05526ba-kube-api-access-22x9s\") pod \"auto-csr-approver-29533698-px9xp\" (UID: \"db977573-14f3-461f-96ff-a0c6b05526ba\") " 
pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:00 crc kubenswrapper[5005]: I0225 12:18:00.466733 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:01 crc kubenswrapper[5005]: I0225 12:18:01.013150 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533698-px9xp"] Feb 25 12:18:01 crc kubenswrapper[5005]: I0225 12:18:01.028049 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:18:01 crc kubenswrapper[5005]: I0225 12:18:01.076993 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533698-px9xp" event={"ID":"db977573-14f3-461f-96ff-a0c6b05526ba","Type":"ContainerStarted","Data":"a8c00e1f59d578283efdd14b2310452cff7109f03f5c77eab16df5a8b08575e4"} Feb 25 12:18:03 crc kubenswrapper[5005]: I0225 12:18:03.096752 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533698-px9xp" event={"ID":"db977573-14f3-461f-96ff-a0c6b05526ba","Type":"ContainerStarted","Data":"84afb84acc4e4d0587ee75eb9554b1a5df35733d0e8981e9d8d1831256c711f4"} Feb 25 12:18:03 crc kubenswrapper[5005]: I0225 12:18:03.115523 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533698-px9xp" podStartSLOduration=1.5222390300000002 podStartE2EDuration="3.115505251s" podCreationTimestamp="2026-02-25 12:18:00 +0000 UTC" firstStartedPulling="2026-02-25 12:18:01.02779956 +0000 UTC m=+3595.068531887" lastFinishedPulling="2026-02-25 12:18:02.621065781 +0000 UTC m=+3596.661798108" observedRunningTime="2026-02-25 12:18:03.111346377 +0000 UTC m=+3597.152078724" watchObservedRunningTime="2026-02-25 12:18:03.115505251 +0000 UTC m=+3597.156237578" Feb 25 12:18:04 crc kubenswrapper[5005]: I0225 12:18:04.108192 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="db977573-14f3-461f-96ff-a0c6b05526ba" containerID="84afb84acc4e4d0587ee75eb9554b1a5df35733d0e8981e9d8d1831256c711f4" exitCode=0 Feb 25 12:18:04 crc kubenswrapper[5005]: I0225 12:18:04.108590 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533698-px9xp" event={"ID":"db977573-14f3-461f-96ff-a0c6b05526ba","Type":"ContainerDied","Data":"84afb84acc4e4d0587ee75eb9554b1a5df35733d0e8981e9d8d1831256c711f4"} Feb 25 12:18:05 crc kubenswrapper[5005]: I0225 12:18:05.645785 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:05 crc kubenswrapper[5005]: I0225 12:18:05.771354 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22x9s\" (UniqueName: \"kubernetes.io/projected/db977573-14f3-461f-96ff-a0c6b05526ba-kube-api-access-22x9s\") pod \"db977573-14f3-461f-96ff-a0c6b05526ba\" (UID: \"db977573-14f3-461f-96ff-a0c6b05526ba\") " Feb 25 12:18:05 crc kubenswrapper[5005]: I0225 12:18:05.780849 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db977573-14f3-461f-96ff-a0c6b05526ba-kube-api-access-22x9s" (OuterVolumeSpecName: "kube-api-access-22x9s") pod "db977573-14f3-461f-96ff-a0c6b05526ba" (UID: "db977573-14f3-461f-96ff-a0c6b05526ba"). InnerVolumeSpecName "kube-api-access-22x9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:18:05 crc kubenswrapper[5005]: I0225 12:18:05.874076 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22x9s\" (UniqueName: \"kubernetes.io/projected/db977573-14f3-461f-96ff-a0c6b05526ba-kube-api-access-22x9s\") on node \"crc\" DevicePath \"\"" Feb 25 12:18:06 crc kubenswrapper[5005]: I0225 12:18:06.125512 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533698-px9xp" event={"ID":"db977573-14f3-461f-96ff-a0c6b05526ba","Type":"ContainerDied","Data":"a8c00e1f59d578283efdd14b2310452cff7109f03f5c77eab16df5a8b08575e4"} Feb 25 12:18:06 crc kubenswrapper[5005]: I0225 12:18:06.126237 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c00e1f59d578283efdd14b2310452cff7109f03f5c77eab16df5a8b08575e4" Feb 25 12:18:06 crc kubenswrapper[5005]: I0225 12:18:06.125558 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533698-px9xp" Feb 25 12:18:06 crc kubenswrapper[5005]: I0225 12:18:06.191194 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533692-tclrm"] Feb 25 12:18:06 crc kubenswrapper[5005]: I0225 12:18:06.202211 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533692-tclrm"] Feb 25 12:18:06 crc kubenswrapper[5005]: I0225 12:18:06.700448 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd05dd1e-a301-49a1-b196-126a2590e9b6" path="/var/lib/kubelet/pods/dd05dd1e-a301-49a1-b196-126a2590e9b6/volumes" Feb 25 12:18:20 crc kubenswrapper[5005]: I0225 12:18:20.590079 5005 scope.go:117] "RemoveContainer" containerID="da4df5d6487e271791dda521398e960339958d8c8d33646c7bb3d5954fed5569" Feb 25 12:18:20 crc kubenswrapper[5005]: I0225 12:18:20.644392 5005 scope.go:117] "RemoveContainer" 
containerID="386f85388c3906421769e2dd44b63e803304292d730676ba6a47671b80f9361f" Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.087729 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.088575 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.088656 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.089659 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.089776 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" gracePeriod=600 Feb 25 12:18:28 crc kubenswrapper[5005]: E0225 12:18:28.220946 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.314451 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" exitCode=0 Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.314541 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494"} Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.314722 5005 scope.go:117] "RemoveContainer" containerID="1cd290234490006d5dc0f3b60428821ed2b4d093736936da7505bf8063309e11" Feb 25 12:18:28 crc kubenswrapper[5005]: I0225 12:18:28.315497 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:18:28 crc kubenswrapper[5005]: E0225 12:18:28.315773 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:18:40 crc kubenswrapper[5005]: I0225 12:18:40.685789 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:18:40 crc 
kubenswrapper[5005]: E0225 12:18:40.686586 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:18:51 crc kubenswrapper[5005]: I0225 12:18:51.686847 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:18:51 crc kubenswrapper[5005]: E0225 12:18:51.689470 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:19:04 crc kubenswrapper[5005]: I0225 12:19:04.685435 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:19:04 crc kubenswrapper[5005]: E0225 12:19:04.686241 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:19:17 crc kubenswrapper[5005]: I0225 12:19:17.686528 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 
25 12:19:17 crc kubenswrapper[5005]: E0225 12:19:17.688060 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:19:28 crc kubenswrapper[5005]: I0225 12:19:28.685787 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:19:28 crc kubenswrapper[5005]: E0225 12:19:28.686711 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:19:39 crc kubenswrapper[5005]: I0225 12:19:39.685766 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:19:39 crc kubenswrapper[5005]: E0225 12:19:39.686571 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:19:53 crc kubenswrapper[5005]: I0225 12:19:53.686782 5005 scope.go:117] "RemoveContainer" 
containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:19:53 crc kubenswrapper[5005]: E0225 12:19:53.688050 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.154260 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533700-dbqxd"] Feb 25 12:20:00 crc kubenswrapper[5005]: E0225 12:20:00.155393 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db977573-14f3-461f-96ff-a0c6b05526ba" containerName="oc" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.155406 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="db977573-14f3-461f-96ff-a0c6b05526ba" containerName="oc" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.155587 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="db977573-14f3-461f-96ff-a0c6b05526ba" containerName="oc" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.156200 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.159579 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.159673 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.159714 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.167031 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533700-dbqxd"] Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.212294 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jhm\" (UniqueName: \"kubernetes.io/projected/2ba1594a-9b58-4145-8bff-cf4ce40c3487-kube-api-access-g9jhm\") pod \"auto-csr-approver-29533700-dbqxd\" (UID: \"2ba1594a-9b58-4145-8bff-cf4ce40c3487\") " pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.315232 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jhm\" (UniqueName: \"kubernetes.io/projected/2ba1594a-9b58-4145-8bff-cf4ce40c3487-kube-api-access-g9jhm\") pod \"auto-csr-approver-29533700-dbqxd\" (UID: \"2ba1594a-9b58-4145-8bff-cf4ce40c3487\") " pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.340438 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jhm\" (UniqueName: \"kubernetes.io/projected/2ba1594a-9b58-4145-8bff-cf4ce40c3487-kube-api-access-g9jhm\") pod \"auto-csr-approver-29533700-dbqxd\" (UID: \"2ba1594a-9b58-4145-8bff-cf4ce40c3487\") " 
pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.506162 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:00 crc kubenswrapper[5005]: I0225 12:20:00.963768 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533700-dbqxd"] Feb 25 12:20:01 crc kubenswrapper[5005]: I0225 12:20:01.416446 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" event={"ID":"2ba1594a-9b58-4145-8bff-cf4ce40c3487","Type":"ContainerStarted","Data":"36896bf0f7a50fdf37d8fb6902ea50760980d70821e7a802830b766585a3a81b"} Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.392561 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkw4d"] Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.395003 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.406126 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkw4d"] Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.459045 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbp9\" (UniqueName: \"kubernetes.io/projected/3fbdd309-c911-4e14-b9e7-213310bdf577-kube-api-access-2pbp9\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.459102 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-utilities\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.459192 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-catalog-content\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.561155 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbp9\" (UniqueName: \"kubernetes.io/projected/3fbdd309-c911-4e14-b9e7-213310bdf577-kube-api-access-2pbp9\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.561219 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-utilities\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.561330 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-catalog-content\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.561985 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-catalog-content\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.562591 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-utilities\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.598142 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbp9\" (UniqueName: \"kubernetes.io/projected/3fbdd309-c911-4e14-b9e7-213310bdf577-kube-api-access-2pbp9\") pod \"redhat-marketplace-hkw4d\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:02 crc kubenswrapper[5005]: I0225 12:20:02.739506 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:03 crc kubenswrapper[5005]: I0225 12:20:03.273265 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkw4d"] Feb 25 12:20:03 crc kubenswrapper[5005]: W0225 12:20:03.284623 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fbdd309_c911_4e14_b9e7_213310bdf577.slice/crio-395cba5f2479d15f1119fde4d05b76eedf7617f48d351d53dcde1409bd14d086 WatchSource:0}: Error finding container 395cba5f2479d15f1119fde4d05b76eedf7617f48d351d53dcde1409bd14d086: Status 404 returned error can't find the container with id 395cba5f2479d15f1119fde4d05b76eedf7617f48d351d53dcde1409bd14d086 Feb 25 12:20:03 crc kubenswrapper[5005]: I0225 12:20:03.446405 5005 generic.go:334] "Generic (PLEG): container finished" podID="2ba1594a-9b58-4145-8bff-cf4ce40c3487" containerID="3600c71b3f21755048d2688300cd29c55f0eac48352ac202f719c0487c255ea5" exitCode=0 Feb 25 12:20:03 crc kubenswrapper[5005]: I0225 12:20:03.446503 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" event={"ID":"2ba1594a-9b58-4145-8bff-cf4ce40c3487","Type":"ContainerDied","Data":"3600c71b3f21755048d2688300cd29c55f0eac48352ac202f719c0487c255ea5"} Feb 25 12:20:03 crc kubenswrapper[5005]: I0225 12:20:03.449676 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkw4d" event={"ID":"3fbdd309-c911-4e14-b9e7-213310bdf577","Type":"ContainerStarted","Data":"395cba5f2479d15f1119fde4d05b76eedf7617f48d351d53dcde1409bd14d086"} Feb 25 12:20:04 crc kubenswrapper[5005]: I0225 12:20:04.460909 5005 generic.go:334] "Generic (PLEG): container finished" podID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerID="5b227c18132bf9e8bb00c65d9ea75fb2a8bbcf4b560107e88f01649da2b8baa3" exitCode=0 Feb 25 12:20:04 crc kubenswrapper[5005]: I0225 
12:20:04.460978 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkw4d" event={"ID":"3fbdd309-c911-4e14-b9e7-213310bdf577","Type":"ContainerDied","Data":"5b227c18132bf9e8bb00c65d9ea75fb2a8bbcf4b560107e88f01649da2b8baa3"} Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.020869 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.116781 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9jhm\" (UniqueName: \"kubernetes.io/projected/2ba1594a-9b58-4145-8bff-cf4ce40c3487-kube-api-access-g9jhm\") pod \"2ba1594a-9b58-4145-8bff-cf4ce40c3487\" (UID: \"2ba1594a-9b58-4145-8bff-cf4ce40c3487\") " Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.126966 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba1594a-9b58-4145-8bff-cf4ce40c3487-kube-api-access-g9jhm" (OuterVolumeSpecName: "kube-api-access-g9jhm") pod "2ba1594a-9b58-4145-8bff-cf4ce40c3487" (UID: "2ba1594a-9b58-4145-8bff-cf4ce40c3487"). InnerVolumeSpecName "kube-api-access-g9jhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.219869 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9jhm\" (UniqueName: \"kubernetes.io/projected/2ba1594a-9b58-4145-8bff-cf4ce40c3487-kube-api-access-g9jhm\") on node \"crc\" DevicePath \"\"" Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.473642 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" event={"ID":"2ba1594a-9b58-4145-8bff-cf4ce40c3487","Type":"ContainerDied","Data":"36896bf0f7a50fdf37d8fb6902ea50760980d70821e7a802830b766585a3a81b"} Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.474124 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36896bf0f7a50fdf37d8fb6902ea50760980d70821e7a802830b766585a3a81b" Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.473704 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533700-dbqxd" Feb 25 12:20:05 crc kubenswrapper[5005]: I0225 12:20:05.685532 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:20:05 crc kubenswrapper[5005]: E0225 12:20:05.685947 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:20:06 crc kubenswrapper[5005]: I0225 12:20:06.116711 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533694-t5x7x"] Feb 25 12:20:06 crc kubenswrapper[5005]: I0225 12:20:06.131759 5005 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533694-t5x7x"] Feb 25 12:20:06 crc kubenswrapper[5005]: I0225 12:20:06.485103 5005 generic.go:334] "Generic (PLEG): container finished" podID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerID="0f884b1a753434d42f585b632b15063a2a731cd970cbf210a99bdbf311baeeab" exitCode=0 Feb 25 12:20:06 crc kubenswrapper[5005]: I0225 12:20:06.485252 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkw4d" event={"ID":"3fbdd309-c911-4e14-b9e7-213310bdf577","Type":"ContainerDied","Data":"0f884b1a753434d42f585b632b15063a2a731cd970cbf210a99bdbf311baeeab"} Feb 25 12:20:06 crc kubenswrapper[5005]: I0225 12:20:06.699801 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df35c800-a1cf-4cd0-b9c2-08ab1667de3d" path="/var/lib/kubelet/pods/df35c800-a1cf-4cd0-b9c2-08ab1667de3d/volumes" Feb 25 12:20:07 crc kubenswrapper[5005]: I0225 12:20:07.498605 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkw4d" event={"ID":"3fbdd309-c911-4e14-b9e7-213310bdf577","Type":"ContainerStarted","Data":"db0fe269ec708e2901ab3d9bcaf84de9392b8471b5644f8a0cf15747271fc4a7"} Feb 25 12:20:07 crc kubenswrapper[5005]: I0225 12:20:07.529929 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkw4d" podStartSLOduration=3.006437408 podStartE2EDuration="5.529913178s" podCreationTimestamp="2026-02-25 12:20:02 +0000 UTC" firstStartedPulling="2026-02-25 12:20:04.464636195 +0000 UTC m=+3718.505368522" lastFinishedPulling="2026-02-25 12:20:06.988111965 +0000 UTC m=+3721.028844292" observedRunningTime="2026-02-25 12:20:07.522791835 +0000 UTC m=+3721.563524152" watchObservedRunningTime="2026-02-25 12:20:07.529913178 +0000 UTC m=+3721.570645505" Feb 25 12:20:12 crc kubenswrapper[5005]: I0225 12:20:12.740626 5005 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:12 crc kubenswrapper[5005]: I0225 12:20:12.741312 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:12 crc kubenswrapper[5005]: I0225 12:20:12.796430 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:13 crc kubenswrapper[5005]: I0225 12:20:13.621755 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:13 crc kubenswrapper[5005]: I0225 12:20:13.680796 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkw4d"] Feb 25 12:20:15 crc kubenswrapper[5005]: I0225 12:20:15.582398 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkw4d" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="registry-server" containerID="cri-o://db0fe269ec708e2901ab3d9bcaf84de9392b8471b5644f8a0cf15747271fc4a7" gracePeriod=2 Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.601357 5005 generic.go:334] "Generic (PLEG): container finished" podID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerID="db0fe269ec708e2901ab3d9bcaf84de9392b8471b5644f8a0cf15747271fc4a7" exitCode=0 Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.601417 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkw4d" event={"ID":"3fbdd309-c911-4e14-b9e7-213310bdf577","Type":"ContainerDied","Data":"db0fe269ec708e2901ab3d9bcaf84de9392b8471b5644f8a0cf15747271fc4a7"} Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.855952 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.986635 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-catalog-content\") pod \"3fbdd309-c911-4e14-b9e7-213310bdf577\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.986791 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-utilities\") pod \"3fbdd309-c911-4e14-b9e7-213310bdf577\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.986856 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pbp9\" (UniqueName: \"kubernetes.io/projected/3fbdd309-c911-4e14-b9e7-213310bdf577-kube-api-access-2pbp9\") pod \"3fbdd309-c911-4e14-b9e7-213310bdf577\" (UID: \"3fbdd309-c911-4e14-b9e7-213310bdf577\") " Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.988985 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-utilities" (OuterVolumeSpecName: "utilities") pod "3fbdd309-c911-4e14-b9e7-213310bdf577" (UID: "3fbdd309-c911-4e14-b9e7-213310bdf577"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:20:16 crc kubenswrapper[5005]: I0225 12:20:16.996867 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbdd309-c911-4e14-b9e7-213310bdf577-kube-api-access-2pbp9" (OuterVolumeSpecName: "kube-api-access-2pbp9") pod "3fbdd309-c911-4e14-b9e7-213310bdf577" (UID: "3fbdd309-c911-4e14-b9e7-213310bdf577"). InnerVolumeSpecName "kube-api-access-2pbp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.017303 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fbdd309-c911-4e14-b9e7-213310bdf577" (UID: "3fbdd309-c911-4e14-b9e7-213310bdf577"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.090551 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.090593 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pbp9\" (UniqueName: \"kubernetes.io/projected/3fbdd309-c911-4e14-b9e7-213310bdf577-kube-api-access-2pbp9\") on node \"crc\" DevicePath \"\"" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.090608 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fbdd309-c911-4e14-b9e7-213310bdf577-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.614238 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkw4d" event={"ID":"3fbdd309-c911-4e14-b9e7-213310bdf577","Type":"ContainerDied","Data":"395cba5f2479d15f1119fde4d05b76eedf7617f48d351d53dcde1409bd14d086"} Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.614320 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkw4d" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.614691 5005 scope.go:117] "RemoveContainer" containerID="db0fe269ec708e2901ab3d9bcaf84de9392b8471b5644f8a0cf15747271fc4a7" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.636828 5005 scope.go:117] "RemoveContainer" containerID="0f884b1a753434d42f585b632b15063a2a731cd970cbf210a99bdbf311baeeab" Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.657716 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkw4d"] Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.670875 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkw4d"] Feb 25 12:20:17 crc kubenswrapper[5005]: I0225 12:20:17.684361 5005 scope.go:117] "RemoveContainer" containerID="5b227c18132bf9e8bb00c65d9ea75fb2a8bbcf4b560107e88f01649da2b8baa3" Feb 25 12:20:18 crc kubenswrapper[5005]: I0225 12:20:18.707758 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" path="/var/lib/kubelet/pods/3fbdd309-c911-4e14-b9e7-213310bdf577/volumes" Feb 25 12:20:20 crc kubenswrapper[5005]: I0225 12:20:20.687008 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:20:20 crc kubenswrapper[5005]: E0225 12:20:20.687896 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:20:20 crc kubenswrapper[5005]: I0225 12:20:20.792519 5005 scope.go:117] "RemoveContainer" 
containerID="ee76789ed3e87e2d1c6975a6feb22dcc21e0a79cd47f3e92af1c50a3e61ce530" Feb 25 12:20:33 crc kubenswrapper[5005]: I0225 12:20:33.685919 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:20:33 crc kubenswrapper[5005]: E0225 12:20:33.687160 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:20:45 crc kubenswrapper[5005]: I0225 12:20:45.687320 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:20:45 crc kubenswrapper[5005]: E0225 12:20:45.688509 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:20:59 crc kubenswrapper[5005]: I0225 12:20:59.686081 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:20:59 crc kubenswrapper[5005]: E0225 12:20:59.687246 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:21:13 crc kubenswrapper[5005]: I0225 12:21:13.687455 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:21:13 crc kubenswrapper[5005]: E0225 12:21:13.689079 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:21:25 crc kubenswrapper[5005]: I0225 12:21:25.685826 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:21:25 crc kubenswrapper[5005]: E0225 12:21:25.687229 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:21:40 crc kubenswrapper[5005]: I0225 12:21:40.686552 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:21:40 crc kubenswrapper[5005]: E0225 12:21:40.687920 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:21:51 crc kubenswrapper[5005]: I0225 12:21:51.686873 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:21:51 crc kubenswrapper[5005]: E0225 12:21:51.687928 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.152650 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533702-mflqn"] Feb 25 12:22:00 crc kubenswrapper[5005]: E0225 12:22:00.153997 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="registry-server" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.154013 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="registry-server" Feb 25 12:22:00 crc kubenswrapper[5005]: E0225 12:22:00.154033 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba1594a-9b58-4145-8bff-cf4ce40c3487" containerName="oc" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.154040 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba1594a-9b58-4145-8bff-cf4ce40c3487" containerName="oc" Feb 25 12:22:00 crc kubenswrapper[5005]: E0225 12:22:00.154050 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="extract-utilities" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.154056 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="extract-utilities" Feb 25 12:22:00 crc kubenswrapper[5005]: E0225 12:22:00.154085 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="extract-content" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.154091 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="extract-content" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.154245 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbdd309-c911-4e14-b9e7-213310bdf577" containerName="registry-server" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.154271 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba1594a-9b58-4145-8bff-cf4ce40c3487" containerName="oc" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.155070 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.157803 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.158741 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.158914 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.177556 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533702-mflqn"] Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.303211 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/e7fac4b7-f6b0-452a-afdc-cc013e781567-kube-api-access-xpspp\") pod \"auto-csr-approver-29533702-mflqn\" (UID: \"e7fac4b7-f6b0-452a-afdc-cc013e781567\") " pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.406128 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/e7fac4b7-f6b0-452a-afdc-cc013e781567-kube-api-access-xpspp\") pod \"auto-csr-approver-29533702-mflqn\" (UID: \"e7fac4b7-f6b0-452a-afdc-cc013e781567\") " pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.428485 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/e7fac4b7-f6b0-452a-afdc-cc013e781567-kube-api-access-xpspp\") pod \"auto-csr-approver-29533702-mflqn\" (UID: \"e7fac4b7-f6b0-452a-afdc-cc013e781567\") " 
pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:00 crc kubenswrapper[5005]: I0225 12:22:00.516773 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:01 crc kubenswrapper[5005]: I0225 12:22:01.026071 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533702-mflqn"] Feb 25 12:22:01 crc kubenswrapper[5005]: W0225 12:22:01.033126 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7fac4b7_f6b0_452a_afdc_cc013e781567.slice/crio-a93f48bf54da5e6da37292b815908955e1dcfb82166ced9f985306350504fd30 WatchSource:0}: Error finding container a93f48bf54da5e6da37292b815908955e1dcfb82166ced9f985306350504fd30: Status 404 returned error can't find the container with id a93f48bf54da5e6da37292b815908955e1dcfb82166ced9f985306350504fd30 Feb 25 12:22:01 crc kubenswrapper[5005]: I0225 12:22:01.685646 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533702-mflqn" event={"ID":"e7fac4b7-f6b0-452a-afdc-cc013e781567","Type":"ContainerStarted","Data":"a93f48bf54da5e6da37292b815908955e1dcfb82166ced9f985306350504fd30"} Feb 25 12:22:02 crc kubenswrapper[5005]: I0225 12:22:02.718953 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533702-mflqn" event={"ID":"e7fac4b7-f6b0-452a-afdc-cc013e781567","Type":"ContainerStarted","Data":"d937268573642beff0cb3e0f2f6f7f5768567fbece68bbc8f0fde64bb4693e97"} Feb 25 12:22:02 crc kubenswrapper[5005]: I0225 12:22:02.742906 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533702-mflqn" podStartSLOduration=1.678475581 podStartE2EDuration="2.7428849s" podCreationTimestamp="2026-02-25 12:22:00 +0000 UTC" firstStartedPulling="2026-02-25 12:22:01.037067285 +0000 UTC 
m=+3835.077799612" lastFinishedPulling="2026-02-25 12:22:02.101476604 +0000 UTC m=+3836.142208931" observedRunningTime="2026-02-25 12:22:02.740871789 +0000 UTC m=+3836.781604116" watchObservedRunningTime="2026-02-25 12:22:02.7428849 +0000 UTC m=+3836.783617227" Feb 25 12:22:03 crc kubenswrapper[5005]: I0225 12:22:03.728647 5005 generic.go:334] "Generic (PLEG): container finished" podID="e7fac4b7-f6b0-452a-afdc-cc013e781567" containerID="d937268573642beff0cb3e0f2f6f7f5768567fbece68bbc8f0fde64bb4693e97" exitCode=0 Feb 25 12:22:03 crc kubenswrapper[5005]: I0225 12:22:03.728878 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533702-mflqn" event={"ID":"e7fac4b7-f6b0-452a-afdc-cc013e781567","Type":"ContainerDied","Data":"d937268573642beff0cb3e0f2f6f7f5768567fbece68bbc8f0fde64bb4693e97"} Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.380850 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.436780 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/e7fac4b7-f6b0-452a-afdc-cc013e781567-kube-api-access-xpspp\") pod \"e7fac4b7-f6b0-452a-afdc-cc013e781567\" (UID: \"e7fac4b7-f6b0-452a-afdc-cc013e781567\") " Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.456768 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fac4b7-f6b0-452a-afdc-cc013e781567-kube-api-access-xpspp" (OuterVolumeSpecName: "kube-api-access-xpspp") pod "e7fac4b7-f6b0-452a-afdc-cc013e781567" (UID: "e7fac4b7-f6b0-452a-afdc-cc013e781567"). InnerVolumeSpecName "kube-api-access-xpspp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.539814 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/e7fac4b7-f6b0-452a-afdc-cc013e781567-kube-api-access-xpspp\") on node \"crc\" DevicePath \"\"" Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.749422 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533702-mflqn" event={"ID":"e7fac4b7-f6b0-452a-afdc-cc013e781567","Type":"ContainerDied","Data":"a93f48bf54da5e6da37292b815908955e1dcfb82166ced9f985306350504fd30"} Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.749466 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a93f48bf54da5e6da37292b815908955e1dcfb82166ced9f985306350504fd30" Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.749527 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533702-mflqn" Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.818263 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533696-vrhl2"] Feb 25 12:22:05 crc kubenswrapper[5005]: I0225 12:22:05.828970 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533696-vrhl2"] Feb 25 12:22:06 crc kubenswrapper[5005]: I0225 12:22:06.695918 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:22:06 crc kubenswrapper[5005]: E0225 12:22:06.696765 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:22:06 crc kubenswrapper[5005]: I0225 12:22:06.722557 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d470f4d-6787-4784-8a87-bb3f71090ad1" path="/var/lib/kubelet/pods/0d470f4d-6787-4784-8a87-bb3f71090ad1/volumes" Feb 25 12:22:19 crc kubenswrapper[5005]: I0225 12:22:19.685772 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:22:19 crc kubenswrapper[5005]: E0225 12:22:19.686720 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:22:20 crc kubenswrapper[5005]: I0225 12:22:20.910619 5005 scope.go:117] "RemoveContainer" containerID="82b17989f4115fca9f65c18f0999bdf5f5a39e44e257a3ef1d617d2d388a29df" Feb 25 12:22:31 crc kubenswrapper[5005]: I0225 12:22:31.687052 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:22:31 crc kubenswrapper[5005]: E0225 12:22:31.688773 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:22:45 crc kubenswrapper[5005]: I0225 12:22:45.687070 5005 scope.go:117] "RemoveContainer" 
containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:22:45 crc kubenswrapper[5005]: E0225 12:22:45.688246 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:22:57 crc kubenswrapper[5005]: I0225 12:22:57.686027 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:22:57 crc kubenswrapper[5005]: E0225 12:22:57.688420 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:23:12 crc kubenswrapper[5005]: I0225 12:23:12.686102 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:23:12 crc kubenswrapper[5005]: E0225 12:23:12.689021 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:23:24 crc kubenswrapper[5005]: I0225 12:23:24.685829 5005 scope.go:117] 
"RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:23:24 crc kubenswrapper[5005]: E0225 12:23:24.687027 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:23:35 crc kubenswrapper[5005]: I0225 12:23:35.685927 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:23:36 crc kubenswrapper[5005]: I0225 12:23:36.696215 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"750df94252d948a2c0d1f4ad27cf96c18e2420aea84dab143e82aaae647602ef"} Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.157932 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533704-dfphk"] Feb 25 12:24:00 crc kubenswrapper[5005]: E0225 12:24:00.159084 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fac4b7-f6b0-452a-afdc-cc013e781567" containerName="oc" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.159095 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fac4b7-f6b0-452a-afdc-cc013e781567" containerName="oc" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.159296 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fac4b7-f6b0-452a-afdc-cc013e781567" containerName="oc" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.160201 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.163127 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.163180 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.163341 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.173561 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533704-dfphk"] Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.302257 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57x95\" (UniqueName: \"kubernetes.io/projected/4157a67e-a639-41cb-885f-68a5988a987e-kube-api-access-57x95\") pod \"auto-csr-approver-29533704-dfphk\" (UID: \"4157a67e-a639-41cb-885f-68a5988a987e\") " pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.404263 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x95\" (UniqueName: \"kubernetes.io/projected/4157a67e-a639-41cb-885f-68a5988a987e-kube-api-access-57x95\") pod \"auto-csr-approver-29533704-dfphk\" (UID: \"4157a67e-a639-41cb-885f-68a5988a987e\") " pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.437291 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x95\" (UniqueName: \"kubernetes.io/projected/4157a67e-a639-41cb-885f-68a5988a987e-kube-api-access-57x95\") pod \"auto-csr-approver-29533704-dfphk\" (UID: \"4157a67e-a639-41cb-885f-68a5988a987e\") " 
pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.494963 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.972174 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533704-dfphk"] Feb 25 12:24:00 crc kubenswrapper[5005]: I0225 12:24:00.979830 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:24:01 crc kubenswrapper[5005]: I0225 12:24:01.936406 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533704-dfphk" event={"ID":"4157a67e-a639-41cb-885f-68a5988a987e","Type":"ContainerStarted","Data":"352d9a3541ce0df5c1c65e7cd3c5d560dc8c23567b07ebc338ad97117412be22"} Feb 25 12:24:02 crc kubenswrapper[5005]: I0225 12:24:02.957529 5005 generic.go:334] "Generic (PLEG): container finished" podID="4157a67e-a639-41cb-885f-68a5988a987e" containerID="50fddc9dad1e4c7967fa60abf7f333278686890eab977f5fb9346f6def993240" exitCode=0 Feb 25 12:24:02 crc kubenswrapper[5005]: I0225 12:24:02.957625 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533704-dfphk" event={"ID":"4157a67e-a639-41cb-885f-68a5988a987e","Type":"ContainerDied","Data":"50fddc9dad1e4c7967fa60abf7f333278686890eab977f5fb9346f6def993240"} Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.550851 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.612360 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57x95\" (UniqueName: \"kubernetes.io/projected/4157a67e-a639-41cb-885f-68a5988a987e-kube-api-access-57x95\") pod \"4157a67e-a639-41cb-885f-68a5988a987e\" (UID: \"4157a67e-a639-41cb-885f-68a5988a987e\") " Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.624854 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4157a67e-a639-41cb-885f-68a5988a987e-kube-api-access-57x95" (OuterVolumeSpecName: "kube-api-access-57x95") pod "4157a67e-a639-41cb-885f-68a5988a987e" (UID: "4157a67e-a639-41cb-885f-68a5988a987e"). InnerVolumeSpecName "kube-api-access-57x95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.716911 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57x95\" (UniqueName: \"kubernetes.io/projected/4157a67e-a639-41cb-885f-68a5988a987e-kube-api-access-57x95\") on node \"crc\" DevicePath \"\"" Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.978385 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533704-dfphk" event={"ID":"4157a67e-a639-41cb-885f-68a5988a987e","Type":"ContainerDied","Data":"352d9a3541ce0df5c1c65e7cd3c5d560dc8c23567b07ebc338ad97117412be22"} Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.978779 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352d9a3541ce0df5c1c65e7cd3c5d560dc8c23567b07ebc338ad97117412be22" Feb 25 12:24:04 crc kubenswrapper[5005]: I0225 12:24:04.978448 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533704-dfphk" Feb 25 12:24:05 crc kubenswrapper[5005]: I0225 12:24:05.657689 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533698-px9xp"] Feb 25 12:24:05 crc kubenswrapper[5005]: I0225 12:24:05.675710 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533698-px9xp"] Feb 25 12:24:06 crc kubenswrapper[5005]: I0225 12:24:06.699849 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db977573-14f3-461f-96ff-a0c6b05526ba" path="/var/lib/kubelet/pods/db977573-14f3-461f-96ff-a0c6b05526ba/volumes" Feb 25 12:24:21 crc kubenswrapper[5005]: I0225 12:24:21.042215 5005 scope.go:117] "RemoveContainer" containerID="84afb84acc4e4d0587ee75eb9554b1a5df35733d0e8981e9d8d1831256c711f4" Feb 25 12:25:58 crc kubenswrapper[5005]: I0225 12:25:58.087146 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:25:58 crc kubenswrapper[5005]: I0225 12:25:58.088038 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.160604 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533706-jmmlm"] Feb 25 12:26:00 crc kubenswrapper[5005]: E0225 12:26:00.161932 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4157a67e-a639-41cb-885f-68a5988a987e" containerName="oc" Feb 25 12:26:00 crc 
kubenswrapper[5005]: I0225 12:26:00.161950 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4157a67e-a639-41cb-885f-68a5988a987e" containerName="oc" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.162206 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4157a67e-a639-41cb-885f-68a5988a987e" containerName="oc" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.163020 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.166409 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.167019 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.167194 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.183236 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533706-jmmlm"] Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.206493 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8w5\" (UniqueName: \"kubernetes.io/projected/8f8b6094-492a-489f-995d-181d6dfcf077-kube-api-access-gd8w5\") pod \"auto-csr-approver-29533706-jmmlm\" (UID: \"8f8b6094-492a-489f-995d-181d6dfcf077\") " pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.308996 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8w5\" (UniqueName: \"kubernetes.io/projected/8f8b6094-492a-489f-995d-181d6dfcf077-kube-api-access-gd8w5\") pod \"auto-csr-approver-29533706-jmmlm\" 
(UID: \"8f8b6094-492a-489f-995d-181d6dfcf077\") " pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.336082 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8w5\" (UniqueName: \"kubernetes.io/projected/8f8b6094-492a-489f-995d-181d6dfcf077-kube-api-access-gd8w5\") pod \"auto-csr-approver-29533706-jmmlm\" (UID: \"8f8b6094-492a-489f-995d-181d6dfcf077\") " pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:00 crc kubenswrapper[5005]: I0225 12:26:00.490662 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:01 crc kubenswrapper[5005]: I0225 12:26:01.055520 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533706-jmmlm"] Feb 25 12:26:01 crc kubenswrapper[5005]: I0225 12:26:01.415296 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" event={"ID":"8f8b6094-492a-489f-995d-181d6dfcf077","Type":"ContainerStarted","Data":"294236f032a57adc284454bd2cd97ce0eb584018a0436b7858d4a201ef9b32ce"} Feb 25 12:26:03 crc kubenswrapper[5005]: I0225 12:26:03.443436 5005 generic.go:334] "Generic (PLEG): container finished" podID="8f8b6094-492a-489f-995d-181d6dfcf077" containerID="384fe9e12b16a19b70a2f2a519b735ef4b418e1ddb465f7ddd41c2032498c976" exitCode=0 Feb 25 12:26:03 crc kubenswrapper[5005]: I0225 12:26:03.443511 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" event={"ID":"8f8b6094-492a-489f-995d-181d6dfcf077","Type":"ContainerDied","Data":"384fe9e12b16a19b70a2f2a519b735ef4b418e1ddb465f7ddd41c2032498c976"} Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.128768 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.333277 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8w5\" (UniqueName: \"kubernetes.io/projected/8f8b6094-492a-489f-995d-181d6dfcf077-kube-api-access-gd8w5\") pod \"8f8b6094-492a-489f-995d-181d6dfcf077\" (UID: \"8f8b6094-492a-489f-995d-181d6dfcf077\") " Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.339311 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8b6094-492a-489f-995d-181d6dfcf077-kube-api-access-gd8w5" (OuterVolumeSpecName: "kube-api-access-gd8w5") pod "8f8b6094-492a-489f-995d-181d6dfcf077" (UID: "8f8b6094-492a-489f-995d-181d6dfcf077"). InnerVolumeSpecName "kube-api-access-gd8w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.435870 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd8w5\" (UniqueName: \"kubernetes.io/projected/8f8b6094-492a-489f-995d-181d6dfcf077-kube-api-access-gd8w5\") on node \"crc\" DevicePath \"\"" Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.476787 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" event={"ID":"8f8b6094-492a-489f-995d-181d6dfcf077","Type":"ContainerDied","Data":"294236f032a57adc284454bd2cd97ce0eb584018a0436b7858d4a201ef9b32ce"} Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.476833 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294236f032a57adc284454bd2cd97ce0eb584018a0436b7858d4a201ef9b32ce" Feb 25 12:26:05 crc kubenswrapper[5005]: I0225 12:26:05.477548 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533706-jmmlm" Feb 25 12:26:06 crc kubenswrapper[5005]: I0225 12:26:06.237000 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533700-dbqxd"] Feb 25 12:26:06 crc kubenswrapper[5005]: I0225 12:26:06.249207 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533700-dbqxd"] Feb 25 12:26:06 crc kubenswrapper[5005]: I0225 12:26:06.702459 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba1594a-9b58-4145-8bff-cf4ce40c3487" path="/var/lib/kubelet/pods/2ba1594a-9b58-4145-8bff-cf4ce40c3487/volumes" Feb 25 12:26:21 crc kubenswrapper[5005]: I0225 12:26:21.184735 5005 scope.go:117] "RemoveContainer" containerID="3600c71b3f21755048d2688300cd29c55f0eac48352ac202f719c0487c255ea5" Feb 25 12:26:28 crc kubenswrapper[5005]: I0225 12:26:28.087446 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:26:28 crc kubenswrapper[5005]: I0225 12:26:28.088011 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.790576 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-56smr"] Feb 25 12:26:37 crc kubenswrapper[5005]: E0225 12:26:37.793623 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8b6094-492a-489f-995d-181d6dfcf077" containerName="oc" Feb 25 12:26:37 crc 
kubenswrapper[5005]: I0225 12:26:37.793648 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8b6094-492a-489f-995d-181d6dfcf077" containerName="oc" Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.794837 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8b6094-492a-489f-995d-181d6dfcf077" containerName="oc" Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.800576 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.829683 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56smr"] Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.969794 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt29\" (UniqueName: \"kubernetes.io/projected/24a966eb-1b03-43b5-b26e-a6ba5d150458-kube-api-access-mzt29\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.969906 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-catalog-content\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:37 crc kubenswrapper[5005]: I0225 12:26:37.969936 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-utilities\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc 
kubenswrapper[5005]: I0225 12:26:38.071836 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-utilities\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.071997 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzt29\" (UniqueName: \"kubernetes.io/projected/24a966eb-1b03-43b5-b26e-a6ba5d150458-kube-api-access-mzt29\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.072054 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-catalog-content\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.072625 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-utilities\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.072635 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-catalog-content\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.104225 
5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzt29\" (UniqueName: \"kubernetes.io/projected/24a966eb-1b03-43b5-b26e-a6ba5d150458-kube-api-access-mzt29\") pod \"community-operators-56smr\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.132574 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:38 crc kubenswrapper[5005]: I0225 12:26:38.732995 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-56smr"] Feb 25 12:26:39 crc kubenswrapper[5005]: I0225 12:26:39.388652 5005 generic.go:334] "Generic (PLEG): container finished" podID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerID="f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629" exitCode=0 Feb 25 12:26:39 crc kubenswrapper[5005]: I0225 12:26:39.388712 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerDied","Data":"f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629"} Feb 25 12:26:39 crc kubenswrapper[5005]: I0225 12:26:39.389089 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerStarted","Data":"a3e5906abc56ecafcc0a9a252a3845aa12f11e5967338db7ad00481f2e13615f"} Feb 25 12:26:41 crc kubenswrapper[5005]: I0225 12:26:41.413533 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerStarted","Data":"b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372"} Feb 25 12:26:42 crc kubenswrapper[5005]: I0225 12:26:42.423717 5005 
generic.go:334] "Generic (PLEG): container finished" podID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerID="b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372" exitCode=0 Feb 25 12:26:42 crc kubenswrapper[5005]: I0225 12:26:42.423773 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerDied","Data":"b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372"} Feb 25 12:26:43 crc kubenswrapper[5005]: I0225 12:26:43.447462 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerStarted","Data":"bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb"} Feb 25 12:26:43 crc kubenswrapper[5005]: I0225 12:26:43.471846 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-56smr" podStartSLOduration=3.025136412 podStartE2EDuration="6.471829003s" podCreationTimestamp="2026-02-25 12:26:37 +0000 UTC" firstStartedPulling="2026-02-25 12:26:39.395147888 +0000 UTC m=+4113.435880215" lastFinishedPulling="2026-02-25 12:26:42.841840479 +0000 UTC m=+4116.882572806" observedRunningTime="2026-02-25 12:26:43.469906214 +0000 UTC m=+4117.510638541" watchObservedRunningTime="2026-02-25 12:26:43.471829003 +0000 UTC m=+4117.512561330" Feb 25 12:26:48 crc kubenswrapper[5005]: I0225 12:26:48.133629 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:48 crc kubenswrapper[5005]: I0225 12:26:48.134474 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:48 crc kubenswrapper[5005]: I0225 12:26:48.191160 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:48 crc kubenswrapper[5005]: I0225 12:26:48.597349 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:48 crc kubenswrapper[5005]: I0225 12:26:48.646773 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56smr"] Feb 25 12:26:50 crc kubenswrapper[5005]: I0225 12:26:50.515182 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-56smr" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="registry-server" containerID="cri-o://bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb" gracePeriod=2 Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.307211 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.372315 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-catalog-content\") pod \"24a966eb-1b03-43b5-b26e-a6ba5d150458\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.372627 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-utilities\") pod \"24a966eb-1b03-43b5-b26e-a6ba5d150458\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.372692 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzt29\" (UniqueName: \"kubernetes.io/projected/24a966eb-1b03-43b5-b26e-a6ba5d150458-kube-api-access-mzt29\") pod 
\"24a966eb-1b03-43b5-b26e-a6ba5d150458\" (UID: \"24a966eb-1b03-43b5-b26e-a6ba5d150458\") " Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.375141 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-utilities" (OuterVolumeSpecName: "utilities") pod "24a966eb-1b03-43b5-b26e-a6ba5d150458" (UID: "24a966eb-1b03-43b5-b26e-a6ba5d150458"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.379780 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a966eb-1b03-43b5-b26e-a6ba5d150458-kube-api-access-mzt29" (OuterVolumeSpecName: "kube-api-access-mzt29") pod "24a966eb-1b03-43b5-b26e-a6ba5d150458" (UID: "24a966eb-1b03-43b5-b26e-a6ba5d150458"). InnerVolumeSpecName "kube-api-access-mzt29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.432482 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24a966eb-1b03-43b5-b26e-a6ba5d150458" (UID: "24a966eb-1b03-43b5-b26e-a6ba5d150458"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.475161 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.475200 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzt29\" (UniqueName: \"kubernetes.io/projected/24a966eb-1b03-43b5-b26e-a6ba5d150458-kube-api-access-mzt29\") on node \"crc\" DevicePath \"\"" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.475211 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a966eb-1b03-43b5-b26e-a6ba5d150458-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.527904 5005 generic.go:334] "Generic (PLEG): container finished" podID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerID="bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb" exitCode=0 Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.527979 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerDied","Data":"bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb"} Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.528010 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-56smr" event={"ID":"24a966eb-1b03-43b5-b26e-a6ba5d150458","Type":"ContainerDied","Data":"a3e5906abc56ecafcc0a9a252a3845aa12f11e5967338db7ad00481f2e13615f"} Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.528027 5005 scope.go:117] "RemoveContainer" containerID="bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 
12:26:51.528193 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-56smr" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.565693 5005 scope.go:117] "RemoveContainer" containerID="b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.586682 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-56smr"] Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.595345 5005 scope.go:117] "RemoveContainer" containerID="f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.597913 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-56smr"] Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.641097 5005 scope.go:117] "RemoveContainer" containerID="bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb" Feb 25 12:26:51 crc kubenswrapper[5005]: E0225 12:26:51.641704 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb\": container with ID starting with bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb not found: ID does not exist" containerID="bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.641761 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb"} err="failed to get container status \"bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb\": rpc error: code = NotFound desc = could not find container \"bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb\": container with ID starting with 
bb050c99904167c3a0392348af1a72428a98a94ce83361f96e79c56d3b987fdb not found: ID does not exist" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.641797 5005 scope.go:117] "RemoveContainer" containerID="b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372" Feb 25 12:26:51 crc kubenswrapper[5005]: E0225 12:26:51.642070 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372\": container with ID starting with b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372 not found: ID does not exist" containerID="b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.642096 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372"} err="failed to get container status \"b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372\": rpc error: code = NotFound desc = could not find container \"b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372\": container with ID starting with b5d19c1655eea308ec0f68a67c6a3962a51ea490617e9d6b5b927d83c2d56372 not found: ID does not exist" Feb 25 12:26:51 crc kubenswrapper[5005]: I0225 12:26:51.642112 5005 scope.go:117] "RemoveContainer" containerID="f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629" Feb 25 12:26:51 crc kubenswrapper[5005]: E0225 12:26:51.642361 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629\": container with ID starting with f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629 not found: ID does not exist" containerID="f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629" Feb 25 12:26:51 crc 
kubenswrapper[5005]: I0225 12:26:51.642443 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629"} err="failed to get container status \"f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629\": rpc error: code = NotFound desc = could not find container \"f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629\": container with ID starting with f52684f7d4613069723f0b992c1b9d951f6fa25830c5ee587547397b1bfbd629 not found: ID does not exist" Feb 25 12:26:52 crc kubenswrapper[5005]: I0225 12:26:52.699846 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" path="/var/lib/kubelet/pods/24a966eb-1b03-43b5-b26e-a6ba5d150458/volumes" Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.088015 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.088801 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.088877 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.090123 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"750df94252d948a2c0d1f4ad27cf96c18e2420aea84dab143e82aaae647602ef"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.090178 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://750df94252d948a2c0d1f4ad27cf96c18e2420aea84dab143e82aaae647602ef" gracePeriod=600 Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.604849 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="750df94252d948a2c0d1f4ad27cf96c18e2420aea84dab143e82aaae647602ef" exitCode=0 Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.604945 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"750df94252d948a2c0d1f4ad27cf96c18e2420aea84dab143e82aaae647602ef"} Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.605871 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6"} Feb 25 12:26:58 crc kubenswrapper[5005]: I0225 12:26:58.605905 5005 scope.go:117] "RemoveContainer" containerID="21a4def3ca83173a08a0f2afdddef58f96111d1f23cd1ee43227b62abca86494" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.121897 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdl8m"] Feb 25 12:27:25 crc kubenswrapper[5005]: E0225 
12:27:25.123151 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="extract-content" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.123167 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="extract-content" Feb 25 12:27:25 crc kubenswrapper[5005]: E0225 12:27:25.123203 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="registry-server" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.123209 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="registry-server" Feb 25 12:27:25 crc kubenswrapper[5005]: E0225 12:27:25.123225 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="extract-utilities" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.123234 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="extract-utilities" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.123487 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a966eb-1b03-43b5-b26e-a6ba5d150458" containerName="registry-server" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.125249 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.144266 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdl8m"] Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.290159 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-catalog-content\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.290318 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-utilities\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.290356 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhf7\" (UniqueName: \"kubernetes.io/projected/8e4ba16d-2f15-4af5-b426-d1c90caeec47-kube-api-access-6jhf7\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.392001 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-utilities\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.392054 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6jhf7\" (UniqueName: \"kubernetes.io/projected/8e4ba16d-2f15-4af5-b426-d1c90caeec47-kube-api-access-6jhf7\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.392194 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-catalog-content\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.392664 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-utilities\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.392706 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-catalog-content\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.420297 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhf7\" (UniqueName: \"kubernetes.io/projected/8e4ba16d-2f15-4af5-b426-d1c90caeec47-kube-api-access-6jhf7\") pod \"certified-operators-rdl8m\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:25 crc kubenswrapper[5005]: I0225 12:27:25.450891 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:26 crc kubenswrapper[5005]: I0225 12:27:26.014505 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdl8m"] Feb 25 12:27:26 crc kubenswrapper[5005]: I0225 12:27:26.870550 5005 generic.go:334] "Generic (PLEG): container finished" podID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerID="25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0" exitCode=0 Feb 25 12:27:26 crc kubenswrapper[5005]: I0225 12:27:26.870919 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerDied","Data":"25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0"} Feb 25 12:27:26 crc kubenswrapper[5005]: I0225 12:27:26.870946 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerStarted","Data":"ecc7a4f9ccccb7f01390aeab9212d34f9c9afccfbdda74a8a119514e59f82354"} Feb 25 12:27:27 crc kubenswrapper[5005]: I0225 12:27:27.884202 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerStarted","Data":"254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee"} Feb 25 12:27:29 crc kubenswrapper[5005]: I0225 12:27:29.903227 5005 generic.go:334] "Generic (PLEG): container finished" podID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerID="254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee" exitCode=0 Feb 25 12:27:29 crc kubenswrapper[5005]: I0225 12:27:29.903306 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" 
event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerDied","Data":"254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee"} Feb 25 12:27:30 crc kubenswrapper[5005]: I0225 12:27:30.922267 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerStarted","Data":"f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387"} Feb 25 12:27:30 crc kubenswrapper[5005]: I0225 12:27:30.947459 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdl8m" podStartSLOduration=2.505364552 podStartE2EDuration="5.94743476s" podCreationTimestamp="2026-02-25 12:27:25 +0000 UTC" firstStartedPulling="2026-02-25 12:27:26.876269115 +0000 UTC m=+4160.917001442" lastFinishedPulling="2026-02-25 12:27:30.318339313 +0000 UTC m=+4164.359071650" observedRunningTime="2026-02-25 12:27:30.939945569 +0000 UTC m=+4164.980677896" watchObservedRunningTime="2026-02-25 12:27:30.94743476 +0000 UTC m=+4164.988167107" Feb 25 12:27:35 crc kubenswrapper[5005]: I0225 12:27:35.451694 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:35 crc kubenswrapper[5005]: I0225 12:27:35.452845 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:35 crc kubenswrapper[5005]: I0225 12:27:35.507677 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:36 crc kubenswrapper[5005]: I0225 12:27:36.018473 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:36 crc kubenswrapper[5005]: I0225 12:27:36.074201 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-rdl8m"] Feb 25 12:27:37 crc kubenswrapper[5005]: I0225 12:27:37.985946 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdl8m" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="registry-server" containerID="cri-o://f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387" gracePeriod=2 Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.772491 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.882109 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jhf7\" (UniqueName: \"kubernetes.io/projected/8e4ba16d-2f15-4af5-b426-d1c90caeec47-kube-api-access-6jhf7\") pod \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.882237 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-catalog-content\") pod \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.882356 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-utilities\") pod \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\" (UID: \"8e4ba16d-2f15-4af5-b426-d1c90caeec47\") " Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.883653 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-utilities" (OuterVolumeSpecName: "utilities") pod "8e4ba16d-2f15-4af5-b426-d1c90caeec47" (UID: 
"8e4ba16d-2f15-4af5-b426-d1c90caeec47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.900581 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4ba16d-2f15-4af5-b426-d1c90caeec47-kube-api-access-6jhf7" (OuterVolumeSpecName: "kube-api-access-6jhf7") pod "8e4ba16d-2f15-4af5-b426-d1c90caeec47" (UID: "8e4ba16d-2f15-4af5-b426-d1c90caeec47"). InnerVolumeSpecName "kube-api-access-6jhf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.945218 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e4ba16d-2f15-4af5-b426-d1c90caeec47" (UID: "8e4ba16d-2f15-4af5-b426-d1c90caeec47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.984954 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jhf7\" (UniqueName: \"kubernetes.io/projected/8e4ba16d-2f15-4af5-b426-d1c90caeec47-kube-api-access-6jhf7\") on node \"crc\" DevicePath \"\"" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.985257 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.985361 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4ba16d-2f15-4af5-b426-d1c90caeec47-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.997349 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerID="f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387" exitCode=0 Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.997407 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerDied","Data":"f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387"} Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.997469 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdl8m" event={"ID":"8e4ba16d-2f15-4af5-b426-d1c90caeec47","Type":"ContainerDied","Data":"ecc7a4f9ccccb7f01390aeab9212d34f9c9afccfbdda74a8a119514e59f82354"} Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.997472 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdl8m" Feb 25 12:27:38 crc kubenswrapper[5005]: I0225 12:27:38.997495 5005 scope.go:117] "RemoveContainer" containerID="f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.050196 5005 scope.go:117] "RemoveContainer" containerID="254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.057975 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdl8m"] Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.067726 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdl8m"] Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.085947 5005 scope.go:117] "RemoveContainer" containerID="25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.138206 5005 scope.go:117] "RemoveContainer" 
containerID="f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387" Feb 25 12:27:39 crc kubenswrapper[5005]: E0225 12:27:39.140891 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387\": container with ID starting with f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387 not found: ID does not exist" containerID="f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.141224 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387"} err="failed to get container status \"f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387\": rpc error: code = NotFound desc = could not find container \"f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387\": container with ID starting with f07ce38f21adb9e3238f017644135b97f66a953f1c4a231a6008e93e9f889387 not found: ID does not exist" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.141439 5005 scope.go:117] "RemoveContainer" containerID="254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee" Feb 25 12:27:39 crc kubenswrapper[5005]: E0225 12:27:39.143916 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee\": container with ID starting with 254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee not found: ID does not exist" containerID="254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.144111 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee"} err="failed to get container status \"254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee\": rpc error: code = NotFound desc = could not find container \"254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee\": container with ID starting with 254c57b79d3f41144a647e6b109b56236d4d37ec4424b536cb44eb2d43843cee not found: ID does not exist" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.144244 5005 scope.go:117] "RemoveContainer" containerID="25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0" Feb 25 12:27:39 crc kubenswrapper[5005]: E0225 12:27:39.144789 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0\": container with ID starting with 25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0 not found: ID does not exist" containerID="25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0" Feb 25 12:27:39 crc kubenswrapper[5005]: I0225 12:27:39.144828 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0"} err="failed to get container status \"25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0\": rpc error: code = NotFound desc = could not find container \"25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0\": container with ID starting with 25ade70dc636924e4e258eac49445a71002cf893d81056fb9383e52ef51446c0 not found: ID does not exist" Feb 25 12:27:40 crc kubenswrapper[5005]: I0225 12:27:40.697303 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" path="/var/lib/kubelet/pods/8e4ba16d-2f15-4af5-b426-d1c90caeec47/volumes" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 
12:28:00.148673 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533708-cxkpn"] Feb 25 12:28:00 crc kubenswrapper[5005]: E0225 12:28:00.155845 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="extract-content" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.155882 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="extract-content" Feb 25 12:28:00 crc kubenswrapper[5005]: E0225 12:28:00.155896 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="registry-server" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.155904 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="registry-server" Feb 25 12:28:00 crc kubenswrapper[5005]: E0225 12:28:00.155919 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="extract-utilities" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.155925 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="extract-utilities" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.156106 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4ba16d-2f15-4af5-b426-d1c90caeec47" containerName="registry-server" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.156805 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.165231 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.165346 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.165737 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.186039 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533708-cxkpn"] Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.293627 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4kl2\" (UniqueName: \"kubernetes.io/projected/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a-kube-api-access-f4kl2\") pod \"auto-csr-approver-29533708-cxkpn\" (UID: \"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a\") " pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.397157 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4kl2\" (UniqueName: \"kubernetes.io/projected/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a-kube-api-access-f4kl2\") pod \"auto-csr-approver-29533708-cxkpn\" (UID: \"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a\") " pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.420113 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4kl2\" (UniqueName: \"kubernetes.io/projected/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a-kube-api-access-f4kl2\") pod \"auto-csr-approver-29533708-cxkpn\" (UID: \"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a\") " 
pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.475260 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:00 crc kubenswrapper[5005]: I0225 12:28:00.972129 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533708-cxkpn"] Feb 25 12:28:01 crc kubenswrapper[5005]: I0225 12:28:01.217688 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" event={"ID":"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a","Type":"ContainerStarted","Data":"e8dcea5a455fc4a56ab5ca89b8661af53dcd46fffef407b310bcc75f8c41f29b"} Feb 25 12:28:03 crc kubenswrapper[5005]: I0225 12:28:03.274736 5005 generic.go:334] "Generic (PLEG): container finished" podID="ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a" containerID="5f13ea91fa9659b2abe15368553af64a409ca2ac85d288fb67212fe90fe5e401" exitCode=0 Feb 25 12:28:03 crc kubenswrapper[5005]: I0225 12:28:03.274859 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" event={"ID":"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a","Type":"ContainerDied","Data":"5f13ea91fa9659b2abe15368553af64a409ca2ac85d288fb67212fe90fe5e401"} Feb 25 12:28:04 crc kubenswrapper[5005]: I0225 12:28:04.877628 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:04 crc kubenswrapper[5005]: I0225 12:28:04.997468 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4kl2\" (UniqueName: \"kubernetes.io/projected/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a-kube-api-access-f4kl2\") pod \"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a\" (UID: \"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a\") " Feb 25 12:28:05 crc kubenswrapper[5005]: I0225 12:28:05.017211 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a-kube-api-access-f4kl2" (OuterVolumeSpecName: "kube-api-access-f4kl2") pod "ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a" (UID: "ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a"). InnerVolumeSpecName "kube-api-access-f4kl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:28:05 crc kubenswrapper[5005]: I0225 12:28:05.100173 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4kl2\" (UniqueName: \"kubernetes.io/projected/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a-kube-api-access-f4kl2\") on node \"crc\" DevicePath \"\"" Feb 25 12:28:05 crc kubenswrapper[5005]: I0225 12:28:05.298940 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" event={"ID":"ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a","Type":"ContainerDied","Data":"e8dcea5a455fc4a56ab5ca89b8661af53dcd46fffef407b310bcc75f8c41f29b"} Feb 25 12:28:05 crc kubenswrapper[5005]: I0225 12:28:05.299474 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8dcea5a455fc4a56ab5ca89b8661af53dcd46fffef407b310bcc75f8c41f29b" Feb 25 12:28:05 crc kubenswrapper[5005]: I0225 12:28:05.299205 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533708-cxkpn" Feb 25 12:28:06 crc kubenswrapper[5005]: I0225 12:28:06.003672 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533702-mflqn"] Feb 25 12:28:06 crc kubenswrapper[5005]: I0225 12:28:06.016469 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533702-mflqn"] Feb 25 12:28:06 crc kubenswrapper[5005]: I0225 12:28:06.698971 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fac4b7-f6b0-452a-afdc-cc013e781567" path="/var/lib/kubelet/pods/e7fac4b7-f6b0-452a-afdc-cc013e781567/volumes" Feb 25 12:28:21 crc kubenswrapper[5005]: I0225 12:28:21.303085 5005 scope.go:117] "RemoveContainer" containerID="d937268573642beff0cb3e0f2f6f7f5768567fbece68bbc8f0fde64bb4693e97" Feb 25 12:28:58 crc kubenswrapper[5005]: I0225 12:28:58.087528 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:28:58 crc kubenswrapper[5005]: I0225 12:28:58.088463 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:29:28 crc kubenswrapper[5005]: I0225 12:29:28.088031 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:29:28 crc kubenswrapper[5005]: 
I0225 12:29:28.088946 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.087364 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.088423 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.088512 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.090022 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.090113 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" containerID="cri-o://3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" gracePeriod=600 Feb 25 12:29:58 crc kubenswrapper[5005]: E0225 12:29:58.225815 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.406367 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" exitCode=0 Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.406403 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6"} Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.406460 5005 scope.go:117] "RemoveContainer" containerID="750df94252d948a2c0d1f4ad27cf96c18e2420aea84dab143e82aaae647602ef" Feb 25 12:29:58 crc kubenswrapper[5005]: I0225 12:29:58.407186 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:29:58 crc kubenswrapper[5005]: E0225 12:29:58.407557 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.158205 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533710-nx8m5"] Feb 25 12:30:00 crc kubenswrapper[5005]: E0225 12:30:00.159409 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a" containerName="oc" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.159426 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a" containerName="oc" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.159718 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a" containerName="oc" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.160650 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.167653 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.167742 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.167755 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.173047 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf"] Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.174749 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.177745 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.182823 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.187403 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533710-nx8m5"] Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.196810 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf"] Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.271954 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgrs\" (UniqueName: \"kubernetes.io/projected/2a26754f-a311-4c24-9009-051e7017c182-kube-api-access-hwgrs\") pod \"auto-csr-approver-29533710-nx8m5\" (UID: \"2a26754f-a311-4c24-9009-051e7017c182\") " pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.374523 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwgrs\" (UniqueName: \"kubernetes.io/projected/2a26754f-a311-4c24-9009-051e7017c182-kube-api-access-hwgrs\") pod \"auto-csr-approver-29533710-nx8m5\" (UID: \"2a26754f-a311-4c24-9009-051e7017c182\") " pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.374714 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kh7c\" (UniqueName: 
\"kubernetes.io/projected/e019bb9f-e3cf-4478-a93c-964ec21e955c-kube-api-access-4kh7c\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.374768 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e019bb9f-e3cf-4478-a93c-964ec21e955c-secret-volume\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.374809 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e019bb9f-e3cf-4478-a93c-964ec21e955c-config-volume\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.407536 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwgrs\" (UniqueName: \"kubernetes.io/projected/2a26754f-a311-4c24-9009-051e7017c182-kube-api-access-hwgrs\") pod \"auto-csr-approver-29533710-nx8m5\" (UID: \"2a26754f-a311-4c24-9009-051e7017c182\") " pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.478625 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kh7c\" (UniqueName: \"kubernetes.io/projected/e019bb9f-e3cf-4478-a93c-964ec21e955c-kube-api-access-4kh7c\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc 
kubenswrapper[5005]: I0225 12:30:00.478719 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e019bb9f-e3cf-4478-a93c-964ec21e955c-secret-volume\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.478770 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e019bb9f-e3cf-4478-a93c-964ec21e955c-config-volume\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.482051 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e019bb9f-e3cf-4478-a93c-964ec21e955c-config-volume\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.488098 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e019bb9f-e3cf-4478-a93c-964ec21e955c-secret-volume\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.489614 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.535623 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kh7c\" (UniqueName: \"kubernetes.io/projected/e019bb9f-e3cf-4478-a93c-964ec21e955c-kube-api-access-4kh7c\") pod \"collect-profiles-29533710-9pqdf\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:00 crc kubenswrapper[5005]: I0225 12:30:00.799564 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:01 crc kubenswrapper[5005]: I0225 12:30:01.005736 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533710-nx8m5"] Feb 25 12:30:01 crc kubenswrapper[5005]: I0225 12:30:01.011534 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:30:01 crc kubenswrapper[5005]: I0225 12:30:01.275653 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf"] Feb 25 12:30:01 crc kubenswrapper[5005]: I0225 12:30:01.440401 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" event={"ID":"e019bb9f-e3cf-4478-a93c-964ec21e955c","Type":"ContainerStarted","Data":"77acfbb19d022de41092234f4c493730473c1fd4d3d7332ac0ac051adaa5e7b7"} Feb 25 12:30:01 crc kubenswrapper[5005]: I0225 12:30:01.443334 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" event={"ID":"2a26754f-a311-4c24-9009-051e7017c182","Type":"ContainerStarted","Data":"e4136765bc50949b2eaa6a4ea35212de064d011a22b4ae6891c5172a7dbc06f7"} Feb 25 12:30:02 crc kubenswrapper[5005]: I0225 12:30:02.460550 5005 
generic.go:334] "Generic (PLEG): container finished" podID="e019bb9f-e3cf-4478-a93c-964ec21e955c" containerID="4a9703ca3bdc865f38f9f904bbe6322b957b88fac42f2c8950a3006c99f74dd7" exitCode=0 Feb 25 12:30:02 crc kubenswrapper[5005]: I0225 12:30:02.460703 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" event={"ID":"e019bb9f-e3cf-4478-a93c-964ec21e955c","Type":"ContainerDied","Data":"4a9703ca3bdc865f38f9f904bbe6322b957b88fac42f2c8950a3006c99f74dd7"} Feb 25 12:30:03 crc kubenswrapper[5005]: I0225 12:30:03.475248 5005 generic.go:334] "Generic (PLEG): container finished" podID="2a26754f-a311-4c24-9009-051e7017c182" containerID="e522cf7fe3ca817370a11b710c3f95128252551aabeddab077247bb16eb4a131" exitCode=0 Feb 25 12:30:03 crc kubenswrapper[5005]: I0225 12:30:03.475519 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" event={"ID":"2a26754f-a311-4c24-9009-051e7017c182","Type":"ContainerDied","Data":"e522cf7fe3ca817370a11b710c3f95128252551aabeddab077247bb16eb4a131"} Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.084252 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.165491 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e019bb9f-e3cf-4478-a93c-964ec21e955c-config-volume\") pod \"e019bb9f-e3cf-4478-a93c-964ec21e955c\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.165757 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kh7c\" (UniqueName: \"kubernetes.io/projected/e019bb9f-e3cf-4478-a93c-964ec21e955c-kube-api-access-4kh7c\") pod \"e019bb9f-e3cf-4478-a93c-964ec21e955c\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.165994 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e019bb9f-e3cf-4478-a93c-964ec21e955c-secret-volume\") pod \"e019bb9f-e3cf-4478-a93c-964ec21e955c\" (UID: \"e019bb9f-e3cf-4478-a93c-964ec21e955c\") " Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.167548 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e019bb9f-e3cf-4478-a93c-964ec21e955c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e019bb9f-e3cf-4478-a93c-964ec21e955c" (UID: "e019bb9f-e3cf-4478-a93c-964ec21e955c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.173567 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e019bb9f-e3cf-4478-a93c-964ec21e955c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e019bb9f-e3cf-4478-a93c-964ec21e955c" (UID: "e019bb9f-e3cf-4478-a93c-964ec21e955c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.174878 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e019bb9f-e3cf-4478-a93c-964ec21e955c-kube-api-access-4kh7c" (OuterVolumeSpecName: "kube-api-access-4kh7c") pod "e019bb9f-e3cf-4478-a93c-964ec21e955c" (UID: "e019bb9f-e3cf-4478-a93c-964ec21e955c"). InnerVolumeSpecName "kube-api-access-4kh7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.269172 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e019bb9f-e3cf-4478-a93c-964ec21e955c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.269223 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kh7c\" (UniqueName: \"kubernetes.io/projected/e019bb9f-e3cf-4478-a93c-964ec21e955c-kube-api-access-4kh7c\") on node \"crc\" DevicePath \"\"" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.269239 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e019bb9f-e3cf-4478-a93c-964ec21e955c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.490104 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" event={"ID":"e019bb9f-e3cf-4478-a93c-964ec21e955c","Type":"ContainerDied","Data":"77acfbb19d022de41092234f4c493730473c1fd4d3d7332ac0ac051adaa5e7b7"} Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.490185 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77acfbb19d022de41092234f4c493730473c1fd4d3d7332ac0ac051adaa5e7b7" Feb 25 12:30:04 crc kubenswrapper[5005]: I0225 12:30:04.490203 5005 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf" Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.079460 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.191313 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwgrs\" (UniqueName: \"kubernetes.io/projected/2a26754f-a311-4c24-9009-051e7017c182-kube-api-access-hwgrs\") pod \"2a26754f-a311-4c24-9009-051e7017c182\" (UID: \"2a26754f-a311-4c24-9009-051e7017c182\") " Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.193100 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp"] Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.199355 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a26754f-a311-4c24-9009-051e7017c182-kube-api-access-hwgrs" (OuterVolumeSpecName: "kube-api-access-hwgrs") pod "2a26754f-a311-4c24-9009-051e7017c182" (UID: "2a26754f-a311-4c24-9009-051e7017c182"). InnerVolumeSpecName "kube-api-access-hwgrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.213902 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533665-tgqfp"] Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.294716 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwgrs\" (UniqueName: \"kubernetes.io/projected/2a26754f-a311-4c24-9009-051e7017c182-kube-api-access-hwgrs\") on node \"crc\" DevicePath \"\"" Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.514076 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" event={"ID":"2a26754f-a311-4c24-9009-051e7017c182","Type":"ContainerDied","Data":"e4136765bc50949b2eaa6a4ea35212de064d011a22b4ae6891c5172a7dbc06f7"} Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.514510 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4136765bc50949b2eaa6a4ea35212de064d011a22b4ae6891c5172a7dbc06f7" Feb 25 12:30:05 crc kubenswrapper[5005]: I0225 12:30:05.514610 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533710-nx8m5" Feb 25 12:30:06 crc kubenswrapper[5005]: I0225 12:30:06.172671 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533704-dfphk"] Feb 25 12:30:06 crc kubenswrapper[5005]: I0225 12:30:06.185661 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533704-dfphk"] Feb 25 12:30:06 crc kubenswrapper[5005]: I0225 12:30:06.699182 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4157a67e-a639-41cb-885f-68a5988a987e" path="/var/lib/kubelet/pods/4157a67e-a639-41cb-885f-68a5988a987e/volumes" Feb 25 12:30:06 crc kubenswrapper[5005]: I0225 12:30:06.701603 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97869f9-ad44-4d89-809d-45be0594ebd2" path="/var/lib/kubelet/pods/e97869f9-ad44-4d89-809d-45be0594ebd2/volumes" Feb 25 12:30:10 crc kubenswrapper[5005]: I0225 12:30:10.686435 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:30:10 crc kubenswrapper[5005]: E0225 12:30:10.687531 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:30:21 crc kubenswrapper[5005]: I0225 12:30:21.416976 5005 scope.go:117] "RemoveContainer" containerID="ee3d4566c7a389e10f857ce4e23156b3ecdbcc37942b82016599207325df9035" Feb 25 12:30:21 crc kubenswrapper[5005]: I0225 12:30:21.456044 5005 scope.go:117] "RemoveContainer" containerID="50fddc9dad1e4c7967fa60abf7f333278686890eab977f5fb9346f6def993240" Feb 25 12:30:25 crc kubenswrapper[5005]: 
I0225 12:30:25.685837 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:30:25 crc kubenswrapper[5005]: E0225 12:30:25.686831 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:30:38 crc kubenswrapper[5005]: I0225 12:30:38.696908 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:30:38 crc kubenswrapper[5005]: E0225 12:30:38.698386 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:30:53 crc kubenswrapper[5005]: I0225 12:30:53.686069 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:30:53 crc kubenswrapper[5005]: E0225 12:30:53.686815 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:31:08 crc 
kubenswrapper[5005]: I0225 12:31:08.685389 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:31:08 crc kubenswrapper[5005]: E0225 12:31:08.686311 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:31:20 crc kubenswrapper[5005]: I0225 12:31:20.686225 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:31:20 crc kubenswrapper[5005]: E0225 12:31:20.687098 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:31:33 crc kubenswrapper[5005]: I0225 12:31:33.685858 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:31:33 crc kubenswrapper[5005]: E0225 12:31:33.686816 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 
25 12:31:47 crc kubenswrapper[5005]: I0225 12:31:47.686405 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:31:47 crc kubenswrapper[5005]: E0225 12:31:47.687405 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:31:58 crc kubenswrapper[5005]: I0225 12:31:58.691637 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:31:58 crc kubenswrapper[5005]: E0225 12:31:58.692641 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.157086 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533712-jcrgk"] Feb 25 12:32:00 crc kubenswrapper[5005]: E0225 12:32:00.157946 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e019bb9f-e3cf-4478-a93c-964ec21e955c" containerName="collect-profiles" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.157960 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e019bb9f-e3cf-4478-a93c-964ec21e955c" containerName="collect-profiles" Feb 25 12:32:00 crc kubenswrapper[5005]: E0225 12:32:00.157974 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2a26754f-a311-4c24-9009-051e7017c182" containerName="oc" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.157980 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a26754f-a311-4c24-9009-051e7017c182" containerName="oc" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.158162 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e019bb9f-e3cf-4478-a93c-964ec21e955c" containerName="collect-profiles" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.158179 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a26754f-a311-4c24-9009-051e7017c182" containerName="oc" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.160622 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.170270 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.170560 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.171183 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.172885 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533712-jcrgk"] Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.223147 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7t2v\" (UniqueName: \"kubernetes.io/projected/eb672101-16b9-485b-96a8-c6e88d843930-kube-api-access-n7t2v\") pod \"auto-csr-approver-29533712-jcrgk\" (UID: \"eb672101-16b9-485b-96a8-c6e88d843930\") " 
pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.325178 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7t2v\" (UniqueName: \"kubernetes.io/projected/eb672101-16b9-485b-96a8-c6e88d843930-kube-api-access-n7t2v\") pod \"auto-csr-approver-29533712-jcrgk\" (UID: \"eb672101-16b9-485b-96a8-c6e88d843930\") " pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.344866 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7t2v\" (UniqueName: \"kubernetes.io/projected/eb672101-16b9-485b-96a8-c6e88d843930-kube-api-access-n7t2v\") pod \"auto-csr-approver-29533712-jcrgk\" (UID: \"eb672101-16b9-485b-96a8-c6e88d843930\") " pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:00 crc kubenswrapper[5005]: I0225 12:32:00.486489 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:01 crc kubenswrapper[5005]: I0225 12:32:01.002975 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533712-jcrgk"] Feb 25 12:32:01 crc kubenswrapper[5005]: I0225 12:32:01.622890 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533712-jcrgk" event={"ID":"eb672101-16b9-485b-96a8-c6e88d843930","Type":"ContainerStarted","Data":"888935d2a25164bdf988d509133d88f3020d5d276ffd623862fe71967f6af169"} Feb 25 12:32:03 crc kubenswrapper[5005]: I0225 12:32:03.645544 5005 generic.go:334] "Generic (PLEG): container finished" podID="eb672101-16b9-485b-96a8-c6e88d843930" containerID="523a2930dbcceeb6f06d80e824dbc90ea2bc328c6f87c044b2f781c766e87e75" exitCode=0 Feb 25 12:32:03 crc kubenswrapper[5005]: I0225 12:32:03.645759 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29533712-jcrgk" event={"ID":"eb672101-16b9-485b-96a8-c6e88d843930","Type":"ContainerDied","Data":"523a2930dbcceeb6f06d80e824dbc90ea2bc328c6f87c044b2f781c766e87e75"} Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.168268 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.233779 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7t2v\" (UniqueName: \"kubernetes.io/projected/eb672101-16b9-485b-96a8-c6e88d843930-kube-api-access-n7t2v\") pod \"eb672101-16b9-485b-96a8-c6e88d843930\" (UID: \"eb672101-16b9-485b-96a8-c6e88d843930\") " Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.243572 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb672101-16b9-485b-96a8-c6e88d843930-kube-api-access-n7t2v" (OuterVolumeSpecName: "kube-api-access-n7t2v") pod "eb672101-16b9-485b-96a8-c6e88d843930" (UID: "eb672101-16b9-485b-96a8-c6e88d843930"). InnerVolumeSpecName "kube-api-access-n7t2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.335944 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7t2v\" (UniqueName: \"kubernetes.io/projected/eb672101-16b9-485b-96a8-c6e88d843930-kube-api-access-n7t2v\") on node \"crc\" DevicePath \"\"" Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.665599 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533712-jcrgk" event={"ID":"eb672101-16b9-485b-96a8-c6e88d843930","Type":"ContainerDied","Data":"888935d2a25164bdf988d509133d88f3020d5d276ffd623862fe71967f6af169"} Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.665660 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="888935d2a25164bdf988d509133d88f3020d5d276ffd623862fe71967f6af169" Feb 25 12:32:05 crc kubenswrapper[5005]: I0225 12:32:05.665675 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533712-jcrgk" Feb 25 12:32:06 crc kubenswrapper[5005]: I0225 12:32:06.245534 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533706-jmmlm"] Feb 25 12:32:06 crc kubenswrapper[5005]: I0225 12:32:06.257295 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533706-jmmlm"] Feb 25 12:32:06 crc kubenswrapper[5005]: I0225 12:32:06.701138 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8b6094-492a-489f-995d-181d6dfcf077" path="/var/lib/kubelet/pods/8f8b6094-492a-489f-995d-181d6dfcf077/volumes" Feb 25 12:32:13 crc kubenswrapper[5005]: I0225 12:32:13.685871 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:32:13 crc kubenswrapper[5005]: E0225 12:32:13.687001 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:32:21 crc kubenswrapper[5005]: I0225 12:32:21.592723 5005 scope.go:117] "RemoveContainer" containerID="384fe9e12b16a19b70a2f2a519b735ef4b418e1ddb465f7ddd41c2032498c976" Feb 25 12:32:24 crc kubenswrapper[5005]: I0225 12:32:24.686226 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:32:24 crc kubenswrapper[5005]: E0225 12:32:24.688714 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:32:39 crc kubenswrapper[5005]: I0225 12:32:39.685729 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:32:39 crc kubenswrapper[5005]: E0225 12:32:39.686701 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:32:51 crc kubenswrapper[5005]: I0225 12:32:51.686847 5005 scope.go:117] "RemoveContainer" 
containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:32:51 crc kubenswrapper[5005]: E0225 12:32:51.688048 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:33:03 crc kubenswrapper[5005]: I0225 12:33:03.686302 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:33:03 crc kubenswrapper[5005]: E0225 12:33:03.688703 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:33:15 crc kubenswrapper[5005]: I0225 12:33:15.685709 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:33:15 crc kubenswrapper[5005]: E0225 12:33:15.687974 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:33:27 crc kubenswrapper[5005]: I0225 12:33:27.685898 5005 scope.go:117] 
"RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:33:27 crc kubenswrapper[5005]: E0225 12:33:27.687501 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:33:40 crc kubenswrapper[5005]: I0225 12:33:40.686035 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:33:40 crc kubenswrapper[5005]: E0225 12:33:40.687431 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.456803 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b4dvp"] Feb 25 12:33:45 crc kubenswrapper[5005]: E0225 12:33:45.462945 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb672101-16b9-485b-96a8-c6e88d843930" containerName="oc" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.462967 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb672101-16b9-485b-96a8-c6e88d843930" containerName="oc" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.463245 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb672101-16b9-485b-96a8-c6e88d843930" containerName="oc" Feb 25 
12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.465068 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.474351 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4dvp"] Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.520827 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7m54\" (UniqueName: \"kubernetes.io/projected/440f4fac-fef8-4093-923b-3d80bd96b6cb-kube-api-access-q7m54\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.520981 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-catalog-content\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.521028 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-utilities\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.622963 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-catalog-content\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc 
kubenswrapper[5005]: I0225 12:33:45.623475 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-catalog-content\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.623567 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-utilities\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.623896 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7m54\" (UniqueName: \"kubernetes.io/projected/440f4fac-fef8-4093-923b-3d80bd96b6cb-kube-api-access-q7m54\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.624768 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-utilities\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.647409 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7m54\" (UniqueName: \"kubernetes.io/projected/440f4fac-fef8-4093-923b-3d80bd96b6cb-kube-api-access-q7m54\") pod \"redhat-operators-b4dvp\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:45 crc kubenswrapper[5005]: I0225 12:33:45.796112 5005 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:33:46 crc kubenswrapper[5005]: I0225 12:33:46.325714 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b4dvp"] Feb 25 12:33:46 crc kubenswrapper[5005]: I0225 12:33:46.726852 5005 generic.go:334] "Generic (PLEG): container finished" podID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerID="bb88946f7f0655528a5631ffeeb8c489fd6e6c94bccb521c0b8dcd8931493e55" exitCode=0 Feb 25 12:33:46 crc kubenswrapper[5005]: I0225 12:33:46.727162 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerDied","Data":"bb88946f7f0655528a5631ffeeb8c489fd6e6c94bccb521c0b8dcd8931493e55"} Feb 25 12:33:46 crc kubenswrapper[5005]: I0225 12:33:46.727191 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerStarted","Data":"a3c29240c329120dc24d2ce908226803b48e10c8e8ecc0b8b0b64ae16229f3dc"} Feb 25 12:33:48 crc kubenswrapper[5005]: I0225 12:33:48.759923 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerStarted","Data":"f2ef7e8ae91db7e3a579ccb924085e0a91224c95e4dc71cc031220916bdf3eaf"} Feb 25 12:33:52 crc kubenswrapper[5005]: I0225 12:33:52.686036 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:33:52 crc kubenswrapper[5005]: E0225 12:33:52.687277 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:33:55 crc kubenswrapper[5005]: I0225 12:33:55.826042 5005 generic.go:334] "Generic (PLEG): container finished" podID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerID="f2ef7e8ae91db7e3a579ccb924085e0a91224c95e4dc71cc031220916bdf3eaf" exitCode=0 Feb 25 12:33:55 crc kubenswrapper[5005]: I0225 12:33:55.826128 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerDied","Data":"f2ef7e8ae91db7e3a579ccb924085e0a91224c95e4dc71cc031220916bdf3eaf"} Feb 25 12:33:56 crc kubenswrapper[5005]: I0225 12:33:56.844020 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerStarted","Data":"65501d6843a155ad18807a3b0fb68deed552ec465276b92f9f54d7e9e8a748c7"} Feb 25 12:33:56 crc kubenswrapper[5005]: I0225 12:33:56.877892 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b4dvp" podStartSLOduration=2.317384028 podStartE2EDuration="11.877869829s" podCreationTimestamp="2026-02-25 12:33:45 +0000 UTC" firstStartedPulling="2026-02-25 12:33:46.729053766 +0000 UTC m=+4540.769786093" lastFinishedPulling="2026-02-25 12:33:56.289539567 +0000 UTC m=+4550.330271894" observedRunningTime="2026-02-25 12:33:56.872256247 +0000 UTC m=+4550.912988584" watchObservedRunningTime="2026-02-25 12:33:56.877869829 +0000 UTC m=+4550.918602166" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.181331 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533714-h5dbw"] Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.193578 5005 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.197180 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.197570 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.197887 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.208225 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533714-h5dbw"] Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.356469 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg7mf\" (UniqueName: \"kubernetes.io/projected/a5fd1577-065e-4c56-8157-b2c13fa896d2-kube-api-access-fg7mf\") pod \"auto-csr-approver-29533714-h5dbw\" (UID: \"a5fd1577-065e-4c56-8157-b2c13fa896d2\") " pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.458699 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg7mf\" (UniqueName: \"kubernetes.io/projected/a5fd1577-065e-4c56-8157-b2c13fa896d2-kube-api-access-fg7mf\") pod \"auto-csr-approver-29533714-h5dbw\" (UID: \"a5fd1577-065e-4c56-8157-b2c13fa896d2\") " pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.491674 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg7mf\" (UniqueName: \"kubernetes.io/projected/a5fd1577-065e-4c56-8157-b2c13fa896d2-kube-api-access-fg7mf\") pod \"auto-csr-approver-29533714-h5dbw\" (UID: 
\"a5fd1577-065e-4c56-8157-b2c13fa896d2\") " pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:00 crc kubenswrapper[5005]: I0225 12:34:00.526449 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:01 crc kubenswrapper[5005]: I0225 12:34:01.024791 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533714-h5dbw"] Feb 25 12:34:01 crc kubenswrapper[5005]: W0225 12:34:01.028254 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5fd1577_065e_4c56_8157_b2c13fa896d2.slice/crio-7c526cafbaf6838ca7d7d96fe4302382cdf8e00d2079cc54203e226dfcbf97d9 WatchSource:0}: Error finding container 7c526cafbaf6838ca7d7d96fe4302382cdf8e00d2079cc54203e226dfcbf97d9: Status 404 returned error can't find the container with id 7c526cafbaf6838ca7d7d96fe4302382cdf8e00d2079cc54203e226dfcbf97d9 Feb 25 12:34:01 crc kubenswrapper[5005]: I0225 12:34:01.903923 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" event={"ID":"a5fd1577-065e-4c56-8157-b2c13fa896d2","Type":"ContainerStarted","Data":"7c526cafbaf6838ca7d7d96fe4302382cdf8e00d2079cc54203e226dfcbf97d9"} Feb 25 12:34:03 crc kubenswrapper[5005]: I0225 12:34:03.940559 5005 generic.go:334] "Generic (PLEG): container finished" podID="a5fd1577-065e-4c56-8157-b2c13fa896d2" containerID="b611eb5645afb572ec0046f9e840269122630108f0f09f0e1e383fb263167161" exitCode=0 Feb 25 12:34:03 crc kubenswrapper[5005]: I0225 12:34:03.940671 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" event={"ID":"a5fd1577-065e-4c56-8157-b2c13fa896d2","Type":"ContainerDied","Data":"b611eb5645afb572ec0046f9e840269122630108f0f09f0e1e383fb263167161"} Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.548184 5005 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.675881 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg7mf\" (UniqueName: \"kubernetes.io/projected/a5fd1577-065e-4c56-8157-b2c13fa896d2-kube-api-access-fg7mf\") pod \"a5fd1577-065e-4c56-8157-b2c13fa896d2\" (UID: \"a5fd1577-065e-4c56-8157-b2c13fa896d2\") " Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.688731 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fd1577-065e-4c56-8157-b2c13fa896d2-kube-api-access-fg7mf" (OuterVolumeSpecName: "kube-api-access-fg7mf") pod "a5fd1577-065e-4c56-8157-b2c13fa896d2" (UID: "a5fd1577-065e-4c56-8157-b2c13fa896d2"). InnerVolumeSpecName "kube-api-access-fg7mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.780712 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg7mf\" (UniqueName: \"kubernetes.io/projected/a5fd1577-065e-4c56-8157-b2c13fa896d2-kube-api-access-fg7mf\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.797260 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.797318 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.964887 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" event={"ID":"a5fd1577-065e-4c56-8157-b2c13fa896d2","Type":"ContainerDied","Data":"7c526cafbaf6838ca7d7d96fe4302382cdf8e00d2079cc54203e226dfcbf97d9"} Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.964952 5005 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c526cafbaf6838ca7d7d96fe4302382cdf8e00d2079cc54203e226dfcbf97d9" Feb 25 12:34:05 crc kubenswrapper[5005]: I0225 12:34:05.965320 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533714-h5dbw" Feb 25 12:34:06 crc kubenswrapper[5005]: I0225 12:34:06.625383 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533708-cxkpn"] Feb 25 12:34:06 crc kubenswrapper[5005]: I0225 12:34:06.639136 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533708-cxkpn"] Feb 25 12:34:06 crc kubenswrapper[5005]: I0225 12:34:06.699218 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a" path="/var/lib/kubelet/pods/ce9f5582-5dde-49f9-81ce-0a95cdcc5d2a/volumes" Feb 25 12:34:06 crc kubenswrapper[5005]: I0225 12:34:06.842593 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b4dvp" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="registry-server" probeResult="failure" output=< Feb 25 12:34:06 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:34:06 crc kubenswrapper[5005]: > Feb 25 12:34:07 crc kubenswrapper[5005]: I0225 12:34:07.686156 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:34:07 crc kubenswrapper[5005]: E0225 12:34:07.686403 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.621128 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pkxq6"] Feb 25 12:34:08 crc kubenswrapper[5005]: E0225 12:34:08.621840 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fd1577-065e-4c56-8157-b2c13fa896d2" containerName="oc" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.621855 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fd1577-065e-4c56-8157-b2c13fa896d2" containerName="oc" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.622038 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fd1577-065e-4c56-8157-b2c13fa896d2" containerName="oc" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.623315 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.641876 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkxq6"] Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.741905 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtq75\" (UniqueName: \"kubernetes.io/projected/943d43fd-a763-4953-9b3c-80fc792920a4-kube-api-access-jtq75\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.743083 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-utilities\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc 
kubenswrapper[5005]: I0225 12:34:08.743126 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-catalog-content\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.845480 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-utilities\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.845535 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-catalog-content\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.845698 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtq75\" (UniqueName: \"kubernetes.io/projected/943d43fd-a763-4953-9b3c-80fc792920a4-kube-api-access-jtq75\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.846238 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-catalog-content\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc 
kubenswrapper[5005]: I0225 12:34:08.846276 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-utilities\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.871158 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtq75\" (UniqueName: \"kubernetes.io/projected/943d43fd-a763-4953-9b3c-80fc792920a4-kube-api-access-jtq75\") pod \"redhat-marketplace-pkxq6\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:08 crc kubenswrapper[5005]: I0225 12:34:08.950589 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:09 crc kubenswrapper[5005]: I0225 12:34:09.462945 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkxq6"] Feb 25 12:34:09 crc kubenswrapper[5005]: W0225 12:34:09.473797 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943d43fd_a763_4953_9b3c_80fc792920a4.slice/crio-a088fe25463ba43733f95b1b638ce1e000292e3c658e0254faa19c9e89b24027 WatchSource:0}: Error finding container a088fe25463ba43733f95b1b638ce1e000292e3c658e0254faa19c9e89b24027: Status 404 returned error can't find the container with id a088fe25463ba43733f95b1b638ce1e000292e3c658e0254faa19c9e89b24027 Feb 25 12:34:10 crc kubenswrapper[5005]: I0225 12:34:10.005694 5005 generic.go:334] "Generic (PLEG): container finished" podID="943d43fd-a763-4953-9b3c-80fc792920a4" containerID="cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b" exitCode=0 Feb 25 12:34:10 crc kubenswrapper[5005]: I0225 12:34:10.005819 5005 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerDied","Data":"cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b"} Feb 25 12:34:10 crc kubenswrapper[5005]: I0225 12:34:10.006153 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerStarted","Data":"a088fe25463ba43733f95b1b638ce1e000292e3c658e0254faa19c9e89b24027"} Feb 25 12:34:11 crc kubenswrapper[5005]: I0225 12:34:11.016498 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerStarted","Data":"877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d"} Feb 25 12:34:13 crc kubenswrapper[5005]: I0225 12:34:13.034513 5005 generic.go:334] "Generic (PLEG): container finished" podID="943d43fd-a763-4953-9b3c-80fc792920a4" containerID="877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d" exitCode=0 Feb 25 12:34:13 crc kubenswrapper[5005]: I0225 12:34:13.034585 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerDied","Data":"877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d"} Feb 25 12:34:14 crc kubenswrapper[5005]: I0225 12:34:14.047655 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerStarted","Data":"eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536"} Feb 25 12:34:14 crc kubenswrapper[5005]: I0225 12:34:14.083012 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pkxq6" 
podStartSLOduration=2.344305459 podStartE2EDuration="6.08298854s" podCreationTimestamp="2026-02-25 12:34:08 +0000 UTC" firstStartedPulling="2026-02-25 12:34:10.010266695 +0000 UTC m=+4564.050999022" lastFinishedPulling="2026-02-25 12:34:13.748949776 +0000 UTC m=+4567.789682103" observedRunningTime="2026-02-25 12:34:14.073849368 +0000 UTC m=+4568.114581695" watchObservedRunningTime="2026-02-25 12:34:14.08298854 +0000 UTC m=+4568.123720867" Feb 25 12:34:15 crc kubenswrapper[5005]: I0225 12:34:15.842422 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:34:15 crc kubenswrapper[5005]: I0225 12:34:15.893946 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:34:17 crc kubenswrapper[5005]: I0225 12:34:17.612130 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4dvp"] Feb 25 12:34:17 crc kubenswrapper[5005]: I0225 12:34:17.612417 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b4dvp" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="registry-server" containerID="cri-o://65501d6843a155ad18807a3b0fb68deed552ec465276b92f9f54d7e9e8a748c7" gracePeriod=2 Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.086444 5005 generic.go:334] "Generic (PLEG): container finished" podID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerID="65501d6843a155ad18807a3b0fb68deed552ec465276b92f9f54d7e9e8a748c7" exitCode=0 Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.086646 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerDied","Data":"65501d6843a155ad18807a3b0fb68deed552ec465276b92f9f54d7e9e8a748c7"} Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.086826 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b4dvp" event={"ID":"440f4fac-fef8-4093-923b-3d80bd96b6cb","Type":"ContainerDied","Data":"a3c29240c329120dc24d2ce908226803b48e10c8e8ecc0b8b0b64ae16229f3dc"} Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.086843 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c29240c329120dc24d2ce908226803b48e10c8e8ecc0b8b0b64ae16229f3dc" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.137584 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.247129 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-catalog-content\") pod \"440f4fac-fef8-4093-923b-3d80bd96b6cb\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.247563 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-utilities\") pod \"440f4fac-fef8-4093-923b-3d80bd96b6cb\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.247587 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7m54\" (UniqueName: \"kubernetes.io/projected/440f4fac-fef8-4093-923b-3d80bd96b6cb-kube-api-access-q7m54\") pod \"440f4fac-fef8-4093-923b-3d80bd96b6cb\" (UID: \"440f4fac-fef8-4093-923b-3d80bd96b6cb\") " Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.250061 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-utilities" (OuterVolumeSpecName: "utilities") pod 
"440f4fac-fef8-4093-923b-3d80bd96b6cb" (UID: "440f4fac-fef8-4093-923b-3d80bd96b6cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.258875 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440f4fac-fef8-4093-923b-3d80bd96b6cb-kube-api-access-q7m54" (OuterVolumeSpecName: "kube-api-access-q7m54") pod "440f4fac-fef8-4093-923b-3d80bd96b6cb" (UID: "440f4fac-fef8-4093-923b-3d80bd96b6cb"). InnerVolumeSpecName "kube-api-access-q7m54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.351278 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.351358 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7m54\" (UniqueName: \"kubernetes.io/projected/440f4fac-fef8-4093-923b-3d80bd96b6cb-kube-api-access-q7m54\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.369526 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "440f4fac-fef8-4093-923b-3d80bd96b6cb" (UID: "440f4fac-fef8-4093-923b-3d80bd96b6cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.454282 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440f4fac-fef8-4093-923b-3d80bd96b6cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.951846 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:18 crc kubenswrapper[5005]: I0225 12:34:18.954121 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:19 crc kubenswrapper[5005]: I0225 12:34:19.002522 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:19 crc kubenswrapper[5005]: I0225 12:34:19.094213 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b4dvp" Feb 25 12:34:19 crc kubenswrapper[5005]: I0225 12:34:19.124255 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b4dvp"] Feb 25 12:34:19 crc kubenswrapper[5005]: I0225 12:34:19.132381 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b4dvp"] Feb 25 12:34:19 crc kubenswrapper[5005]: I0225 12:34:19.147322 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:19 crc kubenswrapper[5005]: I0225 12:34:19.685692 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:34:19 crc kubenswrapper[5005]: E0225 12:34:19.685937 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:34:20 crc kubenswrapper[5005]: I0225 12:34:20.694928 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" path="/var/lib/kubelet/pods/440f4fac-fef8-4093-923b-3d80bd96b6cb/volumes" Feb 25 12:34:21 crc kubenswrapper[5005]: I0225 12:34:21.415495 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkxq6"] Feb 25 12:34:21 crc kubenswrapper[5005]: I0225 12:34:21.721956 5005 scope.go:117] "RemoveContainer" containerID="5f13ea91fa9659b2abe15368553af64a409ca2ac85d288fb67212fe90fe5e401" Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.118108 5005 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-marketplace-pkxq6" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="registry-server" containerID="cri-o://eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536" gracePeriod=2 Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.780714 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.938285 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtq75\" (UniqueName: \"kubernetes.io/projected/943d43fd-a763-4953-9b3c-80fc792920a4-kube-api-access-jtq75\") pod \"943d43fd-a763-4953-9b3c-80fc792920a4\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.938521 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-utilities\") pod \"943d43fd-a763-4953-9b3c-80fc792920a4\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.938620 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-catalog-content\") pod \"943d43fd-a763-4953-9b3c-80fc792920a4\" (UID: \"943d43fd-a763-4953-9b3c-80fc792920a4\") " Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.939228 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-utilities" (OuterVolumeSpecName: "utilities") pod "943d43fd-a763-4953-9b3c-80fc792920a4" (UID: "943d43fd-a763-4953-9b3c-80fc792920a4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.944780 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943d43fd-a763-4953-9b3c-80fc792920a4-kube-api-access-jtq75" (OuterVolumeSpecName: "kube-api-access-jtq75") pod "943d43fd-a763-4953-9b3c-80fc792920a4" (UID: "943d43fd-a763-4953-9b3c-80fc792920a4"). InnerVolumeSpecName "kube-api-access-jtq75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:34:22 crc kubenswrapper[5005]: I0225 12:34:22.978214 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "943d43fd-a763-4953-9b3c-80fc792920a4" (UID: "943d43fd-a763-4953-9b3c-80fc792920a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.040687 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtq75\" (UniqueName: \"kubernetes.io/projected/943d43fd-a763-4953-9b3c-80fc792920a4-kube-api-access-jtq75\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.040724 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.040737 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/943d43fd-a763-4953-9b3c-80fc792920a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.130888 5005 generic.go:334] "Generic (PLEG): container finished" podID="943d43fd-a763-4953-9b3c-80fc792920a4" 
containerID="eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536" exitCode=0 Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.131026 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerDied","Data":"eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536"} Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.131435 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkxq6" event={"ID":"943d43fd-a763-4953-9b3c-80fc792920a4","Type":"ContainerDied","Data":"a088fe25463ba43733f95b1b638ce1e000292e3c658e0254faa19c9e89b24027"} Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.131477 5005 scope.go:117] "RemoveContainer" containerID="eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.131121 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkxq6" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.156267 5005 scope.go:117] "RemoveContainer" containerID="877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.181554 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkxq6"] Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.189529 5005 scope.go:117] "RemoveContainer" containerID="cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.194820 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkxq6"] Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.232271 5005 scope.go:117] "RemoveContainer" containerID="eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536" Feb 25 12:34:23 crc kubenswrapper[5005]: E0225 12:34:23.234132 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536\": container with ID starting with eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536 not found: ID does not exist" containerID="eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.234163 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536"} err="failed to get container status \"eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536\": rpc error: code = NotFound desc = could not find container \"eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536\": container with ID starting with eaf641d0ece80222a8997a7a3e4c88228d26723264c5d44f1c947151cb631536 not found: 
ID does not exist" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.234182 5005 scope.go:117] "RemoveContainer" containerID="877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d" Feb 25 12:34:23 crc kubenswrapper[5005]: E0225 12:34:23.234785 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d\": container with ID starting with 877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d not found: ID does not exist" containerID="877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.234845 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d"} err="failed to get container status \"877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d\": rpc error: code = NotFound desc = could not find container \"877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d\": container with ID starting with 877d9c8ee255d77c6640956aec8d8fbc5553ea73c1bb3b98e144bd929f2ef35d not found: ID does not exist" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.234879 5005 scope.go:117] "RemoveContainer" containerID="cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b" Feb 25 12:34:23 crc kubenswrapper[5005]: E0225 12:34:23.235136 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b\": container with ID starting with cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b not found: ID does not exist" containerID="cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b" Feb 25 12:34:23 crc kubenswrapper[5005]: I0225 12:34:23.235159 5005 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b"} err="failed to get container status \"cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b\": rpc error: code = NotFound desc = could not find container \"cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b\": container with ID starting with cf8e2562a00267de05f7078442950f09c2d4d4508ed6bebb1def11bacb380e8b not found: ID does not exist" Feb 25 12:34:24 crc kubenswrapper[5005]: I0225 12:34:24.697615 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" path="/var/lib/kubelet/pods/943d43fd-a763-4953-9b3c-80fc792920a4/volumes" Feb 25 12:34:33 crc kubenswrapper[5005]: I0225 12:34:33.686549 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:34:33 crc kubenswrapper[5005]: E0225 12:34:33.687436 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:34:45 crc kubenswrapper[5005]: I0225 12:34:45.686178 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:34:45 crc kubenswrapper[5005]: E0225 12:34:45.687731 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:34:58 crc kubenswrapper[5005]: I0225 12:34:58.685287 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:34:59 crc kubenswrapper[5005]: I0225 12:34:59.570661 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"24bb7e6437945004160f29143f5d2520f56d9f7617902d02eb25cc7204439b73"} Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.161155 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533716-vrjfj"] Feb 25 12:36:00 crc kubenswrapper[5005]: E0225 12:36:00.162241 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="registry-server" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162256 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="registry-server" Feb 25 12:36:00 crc kubenswrapper[5005]: E0225 12:36:00.162268 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="extract-content" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162274 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="extract-content" Feb 25 12:36:00 crc kubenswrapper[5005]: E0225 12:36:00.162292 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="extract-utilities" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162298 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="extract-utilities" 
Feb 25 12:36:00 crc kubenswrapper[5005]: E0225 12:36:00.162309 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="extract-utilities" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162316 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="extract-utilities" Feb 25 12:36:00 crc kubenswrapper[5005]: E0225 12:36:00.162328 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="extract-content" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162334 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="extract-content" Feb 25 12:36:00 crc kubenswrapper[5005]: E0225 12:36:00.162347 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="registry-server" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162353 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="registry-server" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162548 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="943d43fd-a763-4953-9b3c-80fc792920a4" containerName="registry-server" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.162563 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="440f4fac-fef8-4093-923b-3d80bd96b6cb" containerName="registry-server" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.163139 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.165315 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.165428 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.181245 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533716-vrjfj"] Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.187662 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.217263 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cw7\" (UniqueName: \"kubernetes.io/projected/44946abf-0d79-492e-a68b-9f5e07fee40d-kube-api-access-k4cw7\") pod \"auto-csr-approver-29533716-vrjfj\" (UID: \"44946abf-0d79-492e-a68b-9f5e07fee40d\") " pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.320292 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4cw7\" (UniqueName: \"kubernetes.io/projected/44946abf-0d79-492e-a68b-9f5e07fee40d-kube-api-access-k4cw7\") pod \"auto-csr-approver-29533716-vrjfj\" (UID: \"44946abf-0d79-492e-a68b-9f5e07fee40d\") " pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.347931 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4cw7\" (UniqueName: \"kubernetes.io/projected/44946abf-0d79-492e-a68b-9f5e07fee40d-kube-api-access-k4cw7\") pod \"auto-csr-approver-29533716-vrjfj\" (UID: \"44946abf-0d79-492e-a68b-9f5e07fee40d\") " 
pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:00 crc kubenswrapper[5005]: I0225 12:36:00.492857 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:01 crc kubenswrapper[5005]: I0225 12:36:01.149360 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533716-vrjfj"] Feb 25 12:36:01 crc kubenswrapper[5005]: I0225 12:36:01.164586 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:36:02 crc kubenswrapper[5005]: I0225 12:36:02.140557 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" event={"ID":"44946abf-0d79-492e-a68b-9f5e07fee40d","Type":"ContainerStarted","Data":"57ee590c6e0062f901dd4baf333ce2f5d8307e34942d96c72a4b6b5a81a4ce8f"} Feb 25 12:36:03 crc kubenswrapper[5005]: I0225 12:36:03.152410 5005 generic.go:334] "Generic (PLEG): container finished" podID="44946abf-0d79-492e-a68b-9f5e07fee40d" containerID="f8a573668de3e7a38ff3c5ec345772ce9aaa4c8a86b681150bcf95bf2f12cd18" exitCode=0 Feb 25 12:36:03 crc kubenswrapper[5005]: I0225 12:36:03.152612 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" event={"ID":"44946abf-0d79-492e-a68b-9f5e07fee40d","Type":"ContainerDied","Data":"f8a573668de3e7a38ff3c5ec345772ce9aaa4c8a86b681150bcf95bf2f12cd18"} Feb 25 12:36:04 crc kubenswrapper[5005]: I0225 12:36:04.706875 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:04 crc kubenswrapper[5005]: I0225 12:36:04.807303 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4cw7\" (UniqueName: \"kubernetes.io/projected/44946abf-0d79-492e-a68b-9f5e07fee40d-kube-api-access-k4cw7\") pod \"44946abf-0d79-492e-a68b-9f5e07fee40d\" (UID: \"44946abf-0d79-492e-a68b-9f5e07fee40d\") " Feb 25 12:36:04 crc kubenswrapper[5005]: I0225 12:36:04.843595 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44946abf-0d79-492e-a68b-9f5e07fee40d-kube-api-access-k4cw7" (OuterVolumeSpecName: "kube-api-access-k4cw7") pod "44946abf-0d79-492e-a68b-9f5e07fee40d" (UID: "44946abf-0d79-492e-a68b-9f5e07fee40d"). InnerVolumeSpecName "kube-api-access-k4cw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:36:04 crc kubenswrapper[5005]: I0225 12:36:04.912560 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4cw7\" (UniqueName: \"kubernetes.io/projected/44946abf-0d79-492e-a68b-9f5e07fee40d-kube-api-access-k4cw7\") on node \"crc\" DevicePath \"\"" Feb 25 12:36:05 crc kubenswrapper[5005]: I0225 12:36:05.176729 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" event={"ID":"44946abf-0d79-492e-a68b-9f5e07fee40d","Type":"ContainerDied","Data":"57ee590c6e0062f901dd4baf333ce2f5d8307e34942d96c72a4b6b5a81a4ce8f"} Feb 25 12:36:05 crc kubenswrapper[5005]: I0225 12:36:05.177011 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ee590c6e0062f901dd4baf333ce2f5d8307e34942d96c72a4b6b5a81a4ce8f" Feb 25 12:36:05 crc kubenswrapper[5005]: I0225 12:36:05.176792 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533716-vrjfj" Feb 25 12:36:05 crc kubenswrapper[5005]: I0225 12:36:05.787464 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533710-nx8m5"] Feb 25 12:36:05 crc kubenswrapper[5005]: I0225 12:36:05.801595 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533710-nx8m5"] Feb 25 12:36:06 crc kubenswrapper[5005]: I0225 12:36:06.700610 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a26754f-a311-4c24-9009-051e7017c182" path="/var/lib/kubelet/pods/2a26754f-a311-4c24-9009-051e7017c182/volumes" Feb 25 12:36:21 crc kubenswrapper[5005]: I0225 12:36:21.839216 5005 scope.go:117] "RemoveContainer" containerID="e522cf7fe3ca817370a11b710c3f95128252551aabeddab077247bb16eb4a131" Feb 25 12:36:58 crc kubenswrapper[5005]: I0225 12:36:58.087476 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:36:58 crc kubenswrapper[5005]: I0225 12:36:58.088199 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:37:28 crc kubenswrapper[5005]: I0225 12:37:28.087726 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:37:28 crc kubenswrapper[5005]: 
I0225 12:37:28.088450 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.087395 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.088078 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.088128 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.088923 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24bb7e6437945004160f29143f5d2520f56d9f7617902d02eb25cc7204439b73"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.088983 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" containerID="cri-o://24bb7e6437945004160f29143f5d2520f56d9f7617902d02eb25cc7204439b73" gracePeriod=600 Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.268211 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="24bb7e6437945004160f29143f5d2520f56d9f7617902d02eb25cc7204439b73" exitCode=0 Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.268263 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"24bb7e6437945004160f29143f5d2520f56d9f7617902d02eb25cc7204439b73"} Feb 25 12:37:58 crc kubenswrapper[5005]: I0225 12:37:58.268299 5005 scope.go:117] "RemoveContainer" containerID="3816c847e7671787ebbfa119db6802c720258a20258c425319f98485afae68d6" Feb 25 12:37:59 crc kubenswrapper[5005]: I0225 12:37:59.282448 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7"} Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.141076 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533718-26pps"] Feb 25 12:38:00 crc kubenswrapper[5005]: E0225 12:38:00.142094 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44946abf-0d79-492e-a68b-9f5e07fee40d" containerName="oc" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.142127 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="44946abf-0d79-492e-a68b-9f5e07fee40d" containerName="oc" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.142496 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="44946abf-0d79-492e-a68b-9f5e07fee40d" containerName="oc" Feb 25 12:38:00 
crc kubenswrapper[5005]: I0225 12:38:00.143431 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.146522 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.147555 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.147830 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.154464 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533718-26pps"] Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.227082 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh468\" (UniqueName: \"kubernetes.io/projected/051b9749-d9da-46ed-850f-764ada717dc6-kube-api-access-fh468\") pod \"auto-csr-approver-29533718-26pps\" (UID: \"051b9749-d9da-46ed-850f-764ada717dc6\") " pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.329493 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh468\" (UniqueName: \"kubernetes.io/projected/051b9749-d9da-46ed-850f-764ada717dc6-kube-api-access-fh468\") pod \"auto-csr-approver-29533718-26pps\" (UID: \"051b9749-d9da-46ed-850f-764ada717dc6\") " pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.349495 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh468\" (UniqueName: \"kubernetes.io/projected/051b9749-d9da-46ed-850f-764ada717dc6-kube-api-access-fh468\") 
pod \"auto-csr-approver-29533718-26pps\" (UID: \"051b9749-d9da-46ed-850f-764ada717dc6\") " pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.496166 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:00 crc kubenswrapper[5005]: I0225 12:38:00.947286 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533718-26pps"] Feb 25 12:38:01 crc kubenswrapper[5005]: I0225 12:38:01.300339 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533718-26pps" event={"ID":"051b9749-d9da-46ed-850f-764ada717dc6","Type":"ContainerStarted","Data":"a73009c3e65a2b563a9fdab12285b976fdf8f2b51c419c07fb8d80f68ce41ca0"} Feb 25 12:38:02 crc kubenswrapper[5005]: I0225 12:38:02.310794 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533718-26pps" event={"ID":"051b9749-d9da-46ed-850f-764ada717dc6","Type":"ContainerStarted","Data":"7c3bfe830fa145829ba01a5214499060ee84ca795864decce826ee198b3b95bb"} Feb 25 12:38:02 crc kubenswrapper[5005]: I0225 12:38:02.338219 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533718-26pps" podStartSLOduration=1.355573568 podStartE2EDuration="2.338195389s" podCreationTimestamp="2026-02-25 12:38:00 +0000 UTC" firstStartedPulling="2026-02-25 12:38:00.959277317 +0000 UTC m=+4795.000009644" lastFinishedPulling="2026-02-25 12:38:01.941899118 +0000 UTC m=+4795.982631465" observedRunningTime="2026-02-25 12:38:02.328133368 +0000 UTC m=+4796.368865695" watchObservedRunningTime="2026-02-25 12:38:02.338195389 +0000 UTC m=+4796.378927716" Feb 25 12:38:03 crc kubenswrapper[5005]: I0225 12:38:03.321047 5005 generic.go:334] "Generic (PLEG): container finished" podID="051b9749-d9da-46ed-850f-764ada717dc6" 
containerID="7c3bfe830fa145829ba01a5214499060ee84ca795864decce826ee198b3b95bb" exitCode=0 Feb 25 12:38:03 crc kubenswrapper[5005]: I0225 12:38:03.321144 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533718-26pps" event={"ID":"051b9749-d9da-46ed-850f-764ada717dc6","Type":"ContainerDied","Data":"7c3bfe830fa145829ba01a5214499060ee84ca795864decce826ee198b3b95bb"} Feb 25 12:38:04 crc kubenswrapper[5005]: I0225 12:38:04.868191 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:04 crc kubenswrapper[5005]: I0225 12:38:04.926185 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh468\" (UniqueName: \"kubernetes.io/projected/051b9749-d9da-46ed-850f-764ada717dc6-kube-api-access-fh468\") pod \"051b9749-d9da-46ed-850f-764ada717dc6\" (UID: \"051b9749-d9da-46ed-850f-764ada717dc6\") " Feb 25 12:38:04 crc kubenswrapper[5005]: I0225 12:38:04.933632 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051b9749-d9da-46ed-850f-764ada717dc6-kube-api-access-fh468" (OuterVolumeSpecName: "kube-api-access-fh468") pod "051b9749-d9da-46ed-850f-764ada717dc6" (UID: "051b9749-d9da-46ed-850f-764ada717dc6"). InnerVolumeSpecName "kube-api-access-fh468". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:38:05 crc kubenswrapper[5005]: I0225 12:38:05.029265 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh468\" (UniqueName: \"kubernetes.io/projected/051b9749-d9da-46ed-850f-764ada717dc6-kube-api-access-fh468\") on node \"crc\" DevicePath \"\"" Feb 25 12:38:05 crc kubenswrapper[5005]: I0225 12:38:05.342648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533718-26pps" event={"ID":"051b9749-d9da-46ed-850f-764ada717dc6","Type":"ContainerDied","Data":"a73009c3e65a2b563a9fdab12285b976fdf8f2b51c419c07fb8d80f68ce41ca0"} Feb 25 12:38:05 crc kubenswrapper[5005]: I0225 12:38:05.342688 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a73009c3e65a2b563a9fdab12285b976fdf8f2b51c419c07fb8d80f68ce41ca0" Feb 25 12:38:05 crc kubenswrapper[5005]: I0225 12:38:05.342739 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533718-26pps" Feb 25 12:38:05 crc kubenswrapper[5005]: I0225 12:38:05.422597 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533712-jcrgk"] Feb 25 12:38:05 crc kubenswrapper[5005]: I0225 12:38:05.456200 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533712-jcrgk"] Feb 25 12:38:06 crc kubenswrapper[5005]: I0225 12:38:06.715940 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb672101-16b9-485b-96a8-c6e88d843930" path="/var/lib/kubelet/pods/eb672101-16b9-485b-96a8-c6e88d843930/volumes" Feb 25 12:38:21 crc kubenswrapper[5005]: I0225 12:38:21.957135 5005 scope.go:117] "RemoveContainer" containerID="523a2930dbcceeb6f06d80e824dbc90ea2bc328c6f87c044b2f781c766e87e75" Feb 25 12:39:58 crc kubenswrapper[5005]: I0225 12:39:58.087212 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:39:58 crc kubenswrapper[5005]: I0225 12:39:58.088561 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.146260 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533720-nwmhs"] Feb 25 12:40:00 crc kubenswrapper[5005]: E0225 12:40:00.147159 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051b9749-d9da-46ed-850f-764ada717dc6" containerName="oc" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.147173 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="051b9749-d9da-46ed-850f-764ada717dc6" containerName="oc" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.147345 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="051b9749-d9da-46ed-850f-764ada717dc6" containerName="oc" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.147950 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.153752 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.154036 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.154162 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.167479 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533720-nwmhs"] Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.315773 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrfp\" (UniqueName: \"kubernetes.io/projected/35c3ec58-da0e-49f6-858d-a48acb64275e-kube-api-access-glrfp\") pod \"auto-csr-approver-29533720-nwmhs\" (UID: \"35c3ec58-da0e-49f6-858d-a48acb64275e\") " pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.418365 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrfp\" (UniqueName: \"kubernetes.io/projected/35c3ec58-da0e-49f6-858d-a48acb64275e-kube-api-access-glrfp\") pod \"auto-csr-approver-29533720-nwmhs\" (UID: \"35c3ec58-da0e-49f6-858d-a48acb64275e\") " pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.451177 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glrfp\" (UniqueName: \"kubernetes.io/projected/35c3ec58-da0e-49f6-858d-a48acb64275e-kube-api-access-glrfp\") pod \"auto-csr-approver-29533720-nwmhs\" (UID: \"35c3ec58-da0e-49f6-858d-a48acb64275e\") " 
pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.467696 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:00 crc kubenswrapper[5005]: I0225 12:40:00.957246 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533720-nwmhs"] Feb 25 12:40:01 crc kubenswrapper[5005]: I0225 12:40:01.938394 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" event={"ID":"35c3ec58-da0e-49f6-858d-a48acb64275e","Type":"ContainerStarted","Data":"a8aef368bd753e067139d334eb3b3d53a20009ff9cb374fd926650b1ac4b4168"} Feb 25 12:40:02 crc kubenswrapper[5005]: I0225 12:40:02.948200 5005 generic.go:334] "Generic (PLEG): container finished" podID="35c3ec58-da0e-49f6-858d-a48acb64275e" containerID="b96677f4e7392b8d6b2bca1f66d988af7a38989cdb1fb2da003a68da0d102dc0" exitCode=0 Feb 25 12:40:02 crc kubenswrapper[5005]: I0225 12:40:02.948297 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" event={"ID":"35c3ec58-da0e-49f6-858d-a48acb64275e","Type":"ContainerDied","Data":"b96677f4e7392b8d6b2bca1f66d988af7a38989cdb1fb2da003a68da0d102dc0"} Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.475268 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.610040 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrfp\" (UniqueName: \"kubernetes.io/projected/35c3ec58-da0e-49f6-858d-a48acb64275e-kube-api-access-glrfp\") pod \"35c3ec58-da0e-49f6-858d-a48acb64275e\" (UID: \"35c3ec58-da0e-49f6-858d-a48acb64275e\") " Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.616200 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c3ec58-da0e-49f6-858d-a48acb64275e-kube-api-access-glrfp" (OuterVolumeSpecName: "kube-api-access-glrfp") pod "35c3ec58-da0e-49f6-858d-a48acb64275e" (UID: "35c3ec58-da0e-49f6-858d-a48acb64275e"). InnerVolumeSpecName "kube-api-access-glrfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.713018 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrfp\" (UniqueName: \"kubernetes.io/projected/35c3ec58-da0e-49f6-858d-a48acb64275e-kube-api-access-glrfp\") on node \"crc\" DevicePath \"\"" Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.979515 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" event={"ID":"35c3ec58-da0e-49f6-858d-a48acb64275e","Type":"ContainerDied","Data":"a8aef368bd753e067139d334eb3b3d53a20009ff9cb374fd926650b1ac4b4168"} Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.979576 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8aef368bd753e067139d334eb3b3d53a20009ff9cb374fd926650b1ac4b4168" Feb 25 12:40:04 crc kubenswrapper[5005]: I0225 12:40:04.979604 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533720-nwmhs" Feb 25 12:40:05 crc kubenswrapper[5005]: I0225 12:40:05.567424 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533714-h5dbw"] Feb 25 12:40:05 crc kubenswrapper[5005]: I0225 12:40:05.579957 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533714-h5dbw"] Feb 25 12:40:06 crc kubenswrapper[5005]: I0225 12:40:06.704160 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fd1577-065e-4c56-8157-b2c13fa896d2" path="/var/lib/kubelet/pods/a5fd1577-065e-4c56-8157-b2c13fa896d2/volumes" Feb 25 12:40:22 crc kubenswrapper[5005]: I0225 12:40:22.083207 5005 scope.go:117] "RemoveContainer" containerID="b611eb5645afb572ec0046f9e840269122630108f0f09f0e1e383fb263167161" Feb 25 12:40:22 crc kubenswrapper[5005]: I0225 12:40:22.129325 5005 scope.go:117] "RemoveContainer" containerID="f2ef7e8ae91db7e3a579ccb924085e0a91224c95e4dc71cc031220916bdf3eaf" Feb 25 12:40:22 crc kubenswrapper[5005]: I0225 12:40:22.170445 5005 scope.go:117] "RemoveContainer" containerID="bb88946f7f0655528a5631ffeeb8c489fd6e6c94bccb521c0b8dcd8931493e55" Feb 25 12:40:22 crc kubenswrapper[5005]: I0225 12:40:22.209161 5005 scope.go:117] "RemoveContainer" containerID="65501d6843a155ad18807a3b0fb68deed552ec465276b92f9f54d7e9e8a748c7" Feb 25 12:40:28 crc kubenswrapper[5005]: I0225 12:40:28.087242 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:40:28 crc kubenswrapper[5005]: I0225 12:40:28.087771 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.087973 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.088658 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.088717 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.089705 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.089780 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" gracePeriod=600 Feb 25 12:40:58 crc kubenswrapper[5005]: E0225 
12:40:58.230893 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.510167 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" exitCode=0 Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.510387 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7"} Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.510593 5005 scope.go:117] "RemoveContainer" containerID="24bb7e6437945004160f29143f5d2520f56d9f7617902d02eb25cc7204439b73" Feb 25 12:40:58 crc kubenswrapper[5005]: I0225 12:40:58.511821 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:40:58 crc kubenswrapper[5005]: E0225 12:40:58.512167 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:41:12 crc kubenswrapper[5005]: I0225 12:41:12.691007 5005 scope.go:117] "RemoveContainer" 
containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:41:12 crc kubenswrapper[5005]: E0225 12:41:12.691820 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:41:25 crc kubenswrapper[5005]: I0225 12:41:25.686006 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:41:25 crc kubenswrapper[5005]: E0225 12:41:25.686654 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:41:37 crc kubenswrapper[5005]: I0225 12:41:37.686042 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:41:37 crc kubenswrapper[5005]: E0225 12:41:37.688087 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:41:52 crc kubenswrapper[5005]: I0225 12:41:52.685536 5005 scope.go:117] 
"RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:41:52 crc kubenswrapper[5005]: E0225 12:41:52.686320 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.197186 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533722-cmbf6"] Feb 25 12:42:00 crc kubenswrapper[5005]: E0225 12:42:00.198443 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c3ec58-da0e-49f6-858d-a48acb64275e" containerName="oc" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.198462 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c3ec58-da0e-49f6-858d-a48acb64275e" containerName="oc" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.198729 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c3ec58-da0e-49f6-858d-a48acb64275e" containerName="oc" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.199571 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.203324 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.203421 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.203421 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.225966 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533722-cmbf6"] Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.295091 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2mv\" (UniqueName: \"kubernetes.io/projected/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c-kube-api-access-wb2mv\") pod \"auto-csr-approver-29533722-cmbf6\" (UID: \"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c\") " pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.397418 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2mv\" (UniqueName: \"kubernetes.io/projected/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c-kube-api-access-wb2mv\") pod \"auto-csr-approver-29533722-cmbf6\" (UID: \"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c\") " pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.417259 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2mv\" (UniqueName: \"kubernetes.io/projected/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c-kube-api-access-wb2mv\") pod \"auto-csr-approver-29533722-cmbf6\" (UID: \"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c\") " 
pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:00 crc kubenswrapper[5005]: I0225 12:42:00.581811 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:01 crc kubenswrapper[5005]: I0225 12:42:01.980956 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533722-cmbf6"] Feb 25 12:42:02 crc kubenswrapper[5005]: I0225 12:42:01.997128 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:42:02 crc kubenswrapper[5005]: I0225 12:42:02.093192 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533722-cmbf6" event={"ID":"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c","Type":"ContainerStarted","Data":"e3b1a370ec6457562d654521f6c1eddf3db92906f39cba8fa97194662a8f0f1e"} Feb 25 12:42:03 crc kubenswrapper[5005]: I0225 12:42:03.686533 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:42:03 crc kubenswrapper[5005]: E0225 12:42:03.687849 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:42:04 crc kubenswrapper[5005]: I0225 12:42:04.112778 5005 generic.go:334] "Generic (PLEG): container finished" podID="92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c" containerID="1b6c217f684b6f033908f6ff727acc77fde0defd9fdb61b5c2aa9711a21324fc" exitCode=0 Feb 25 12:42:04 crc kubenswrapper[5005]: I0225 12:42:04.113036 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29533722-cmbf6" event={"ID":"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c","Type":"ContainerDied","Data":"1b6c217f684b6f033908f6ff727acc77fde0defd9fdb61b5c2aa9711a21324fc"} Feb 25 12:42:05 crc kubenswrapper[5005]: I0225 12:42:05.604316 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:05 crc kubenswrapper[5005]: I0225 12:42:05.704589 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2mv\" (UniqueName: \"kubernetes.io/projected/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c-kube-api-access-wb2mv\") pod \"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c\" (UID: \"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c\") " Feb 25 12:42:05 crc kubenswrapper[5005]: I0225 12:42:05.711611 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c-kube-api-access-wb2mv" (OuterVolumeSpecName: "kube-api-access-wb2mv") pod "92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c" (UID: "92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c"). InnerVolumeSpecName "kube-api-access-wb2mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:42:05 crc kubenswrapper[5005]: I0225 12:42:05.808995 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2mv\" (UniqueName: \"kubernetes.io/projected/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c-kube-api-access-wb2mv\") on node \"crc\" DevicePath \"\"" Feb 25 12:42:06 crc kubenswrapper[5005]: I0225 12:42:06.131891 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533722-cmbf6" event={"ID":"92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c","Type":"ContainerDied","Data":"e3b1a370ec6457562d654521f6c1eddf3db92906f39cba8fa97194662a8f0f1e"} Feb 25 12:42:06 crc kubenswrapper[5005]: I0225 12:42:06.131948 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b1a370ec6457562d654521f6c1eddf3db92906f39cba8fa97194662a8f0f1e" Feb 25 12:42:06 crc kubenswrapper[5005]: I0225 12:42:06.132027 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533722-cmbf6" Feb 25 12:42:06 crc kubenswrapper[5005]: I0225 12:42:06.684162 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533716-vrjfj"] Feb 25 12:42:06 crc kubenswrapper[5005]: I0225 12:42:06.709733 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533716-vrjfj"] Feb 25 12:42:08 crc kubenswrapper[5005]: I0225 12:42:08.696702 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44946abf-0d79-492e-a68b-9f5e07fee40d" path="/var/lib/kubelet/pods/44946abf-0d79-492e-a68b-9f5e07fee40d/volumes" Feb 25 12:42:15 crc kubenswrapper[5005]: I0225 12:42:15.686361 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:42:15 crc kubenswrapper[5005]: E0225 12:42:15.687606 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:42:22 crc kubenswrapper[5005]: I0225 12:42:22.340910 5005 scope.go:117] "RemoveContainer" containerID="f8a573668de3e7a38ff3c5ec345772ce9aaa4c8a86b681150bcf95bf2f12cd18" Feb 25 12:42:28 crc kubenswrapper[5005]: I0225 12:42:28.685384 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:42:28 crc kubenswrapper[5005]: E0225 12:42:28.686088 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:42:43 crc kubenswrapper[5005]: I0225 12:42:43.685912 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:42:43 crc kubenswrapper[5005]: E0225 12:42:43.687168 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:42:54 crc kubenswrapper[5005]: I0225 12:42:54.686037 5005 scope.go:117] "RemoveContainer" 
containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:42:54 crc kubenswrapper[5005]: E0225 12:42:54.686813 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:43:05 crc kubenswrapper[5005]: I0225 12:43:05.685859 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:43:05 crc kubenswrapper[5005]: E0225 12:43:05.686857 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:43:20 crc kubenswrapper[5005]: I0225 12:43:20.686061 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:43:20 crc kubenswrapper[5005]: E0225 12:43:20.687210 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:43:34 crc kubenswrapper[5005]: I0225 12:43:34.688599 5005 scope.go:117] 
"RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:43:34 crc kubenswrapper[5005]: E0225 12:43:34.689730 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:43:46 crc kubenswrapper[5005]: I0225 12:43:46.693116 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:43:46 crc kubenswrapper[5005]: E0225 12:43:46.694080 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.150523 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533724-jtvnw"] Feb 25 12:44:00 crc kubenswrapper[5005]: E0225 12:44:00.151727 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c" containerName="oc" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.151746 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c" containerName="oc" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.152016 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c" containerName="oc" Feb 
25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.152766 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.155014 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.155207 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.155347 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.161191 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533724-jtvnw"] Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.224128 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bkv\" (UniqueName: \"kubernetes.io/projected/72227b2a-5882-4611-8534-7d06c7f26649-kube-api-access-k5bkv\") pod \"auto-csr-approver-29533724-jtvnw\" (UID: \"72227b2a-5882-4611-8534-7d06c7f26649\") " pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.326526 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bkv\" (UniqueName: \"kubernetes.io/projected/72227b2a-5882-4611-8534-7d06c7f26649-kube-api-access-k5bkv\") pod \"auto-csr-approver-29533724-jtvnw\" (UID: \"72227b2a-5882-4611-8534-7d06c7f26649\") " pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.339317 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kxd5"] Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.346533 5005 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.367745 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bkv\" (UniqueName: \"kubernetes.io/projected/72227b2a-5882-4611-8534-7d06c7f26649-kube-api-access-k5bkv\") pod \"auto-csr-approver-29533724-jtvnw\" (UID: \"72227b2a-5882-4611-8534-7d06c7f26649\") " pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.370086 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kxd5"] Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.428493 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588pb\" (UniqueName: \"kubernetes.io/projected/ffaa6723-7110-4200-a903-e89455f3b0d0-kube-api-access-588pb\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.428685 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-catalog-content\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.428992 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-utilities\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.471700 5005 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.532691 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-588pb\" (UniqueName: \"kubernetes.io/projected/ffaa6723-7110-4200-a903-e89455f3b0d0-kube-api-access-588pb\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.533287 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-catalog-content\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.533357 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-utilities\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.533891 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-catalog-content\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.534037 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-utilities\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " 
pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.553314 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-588pb\" (UniqueName: \"kubernetes.io/projected/ffaa6723-7110-4200-a903-e89455f3b0d0-kube-api-access-588pb\") pod \"redhat-operators-8kxd5\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.687899 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:44:00 crc kubenswrapper[5005]: E0225 12:44:00.688199 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.725932 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:00 crc kubenswrapper[5005]: I0225 12:44:00.975633 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533724-jtvnw"] Feb 25 12:44:01 crc kubenswrapper[5005]: I0225 12:44:01.214236 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" event={"ID":"72227b2a-5882-4611-8534-7d06c7f26649","Type":"ContainerStarted","Data":"23095237d9de471120ed2694f854120f8029b77c26889227fd496c1dd3a543f5"} Feb 25 12:44:01 crc kubenswrapper[5005]: I0225 12:44:01.243089 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kxd5"] Feb 25 12:44:02 crc kubenswrapper[5005]: I0225 12:44:02.230961 5005 generic.go:334] "Generic (PLEG): container finished" podID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerID="679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b" exitCode=0 Feb 25 12:44:02 crc kubenswrapper[5005]: I0225 12:44:02.231170 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerDied","Data":"679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b"} Feb 25 12:44:02 crc kubenswrapper[5005]: I0225 12:44:02.231409 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerStarted","Data":"7071d24e9a1b7aff71ee1dd5c26364b896f895b18c63b8c564f0b49f82fe747e"} Feb 25 12:44:03 crc kubenswrapper[5005]: I0225 12:44:03.240350 5005 generic.go:334] "Generic (PLEG): container finished" podID="72227b2a-5882-4611-8534-7d06c7f26649" containerID="513606bc5b1ca31b14de7bdf7cd618c77e1affee5f3a0e9bdb31ddcd8f416c32" exitCode=0 Feb 25 12:44:03 crc kubenswrapper[5005]: I0225 12:44:03.240781 5005 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" event={"ID":"72227b2a-5882-4611-8534-7d06c7f26649","Type":"ContainerDied","Data":"513606bc5b1ca31b14de7bdf7cd618c77e1affee5f3a0e9bdb31ddcd8f416c32"} Feb 25 12:44:04 crc kubenswrapper[5005]: I0225 12:44:04.256719 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerStarted","Data":"aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73"} Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.143296 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.239444 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5bkv\" (UniqueName: \"kubernetes.io/projected/72227b2a-5882-4611-8534-7d06c7f26649-kube-api-access-k5bkv\") pod \"72227b2a-5882-4611-8534-7d06c7f26649\" (UID: \"72227b2a-5882-4611-8534-7d06c7f26649\") " Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.246311 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72227b2a-5882-4611-8534-7d06c7f26649-kube-api-access-k5bkv" (OuterVolumeSpecName: "kube-api-access-k5bkv") pod "72227b2a-5882-4611-8534-7d06c7f26649" (UID: "72227b2a-5882-4611-8534-7d06c7f26649"). InnerVolumeSpecName "kube-api-access-k5bkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.269185 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" event={"ID":"72227b2a-5882-4611-8534-7d06c7f26649","Type":"ContainerDied","Data":"23095237d9de471120ed2694f854120f8029b77c26889227fd496c1dd3a543f5"} Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.269248 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23095237d9de471120ed2694f854120f8029b77c26889227fd496c1dd3a543f5" Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.269210 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533724-jtvnw" Feb 25 12:44:05 crc kubenswrapper[5005]: I0225 12:44:05.342561 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5bkv\" (UniqueName: \"kubernetes.io/projected/72227b2a-5882-4611-8534-7d06c7f26649-kube-api-access-k5bkv\") on node \"crc\" DevicePath \"\"" Feb 25 12:44:06 crc kubenswrapper[5005]: I0225 12:44:06.240160 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533718-26pps"] Feb 25 12:44:06 crc kubenswrapper[5005]: I0225 12:44:06.254592 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533718-26pps"] Feb 25 12:44:06 crc kubenswrapper[5005]: I0225 12:44:06.698810 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051b9749-d9da-46ed-850f-764ada717dc6" path="/var/lib/kubelet/pods/051b9749-d9da-46ed-850f-764ada717dc6/volumes" Feb 25 12:44:10 crc kubenswrapper[5005]: I0225 12:44:10.324527 5005 generic.go:334] "Generic (PLEG): container finished" podID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerID="aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73" exitCode=0 Feb 25 12:44:10 crc kubenswrapper[5005]: I0225 12:44:10.324647 5005 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerDied","Data":"aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73"} Feb 25 12:44:11 crc kubenswrapper[5005]: I0225 12:44:11.335485 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerStarted","Data":"cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14"} Feb 25 12:44:11 crc kubenswrapper[5005]: I0225 12:44:11.374050 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kxd5" podStartSLOduration=2.790232155 podStartE2EDuration="11.373948554s" podCreationTimestamp="2026-02-25 12:44:00 +0000 UTC" firstStartedPulling="2026-02-25 12:44:02.234740163 +0000 UTC m=+5156.275472490" lastFinishedPulling="2026-02-25 12:44:10.818456562 +0000 UTC m=+5164.859188889" observedRunningTime="2026-02-25 12:44:11.364309367 +0000 UTC m=+5165.405041694" watchObservedRunningTime="2026-02-25 12:44:11.373948554 +0000 UTC m=+5165.414680881" Feb 25 12:44:11 crc kubenswrapper[5005]: I0225 12:44:11.686023 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:44:11 crc kubenswrapper[5005]: E0225 12:44:11.686303 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:44:20 crc kubenswrapper[5005]: I0225 12:44:20.726047 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:20 crc kubenswrapper[5005]: I0225 12:44:20.726571 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:21 crc kubenswrapper[5005]: I0225 12:44:21.776332 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8kxd5" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" probeResult="failure" output=< Feb 25 12:44:21 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:44:21 crc kubenswrapper[5005]: > Feb 25 12:44:22 crc kubenswrapper[5005]: I0225 12:44:22.425815 5005 scope.go:117] "RemoveContainer" containerID="7c3bfe830fa145829ba01a5214499060ee84ca795864decce826ee198b3b95bb" Feb 25 12:44:22 crc kubenswrapper[5005]: I0225 12:44:22.686406 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:44:22 crc kubenswrapper[5005]: E0225 12:44:22.686658 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:44:31 crc kubenswrapper[5005]: I0225 12:44:31.780222 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8kxd5" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" probeResult="failure" output=< Feb 25 12:44:31 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:44:31 crc kubenswrapper[5005]: > Feb 25 12:44:33 crc kubenswrapper[5005]: I0225 12:44:33.705056 
5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:44:33 crc kubenswrapper[5005]: E0225 12:44:33.706275 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:44:41 crc kubenswrapper[5005]: I0225 12:44:41.791363 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8kxd5" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" probeResult="failure" output=< Feb 25 12:44:41 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:44:41 crc kubenswrapper[5005]: > Feb 25 12:44:46 crc kubenswrapper[5005]: I0225 12:44:46.692228 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:44:46 crc kubenswrapper[5005]: E0225 12:44:46.693042 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:44:50 crc kubenswrapper[5005]: I0225 12:44:50.797091 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:50 crc kubenswrapper[5005]: I0225 12:44:50.881107 5005 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:51 crc kubenswrapper[5005]: I0225 12:44:51.030928 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kxd5"] Feb 25 12:44:52 crc kubenswrapper[5005]: I0225 12:44:52.695974 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8kxd5" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" containerID="cri-o://cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14" gracePeriod=2 Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.414086 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.518781 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-588pb\" (UniqueName: \"kubernetes.io/projected/ffaa6723-7110-4200-a903-e89455f3b0d0-kube-api-access-588pb\") pod \"ffaa6723-7110-4200-a903-e89455f3b0d0\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.518926 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-catalog-content\") pod \"ffaa6723-7110-4200-a903-e89455f3b0d0\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.519032 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-utilities\") pod \"ffaa6723-7110-4200-a903-e89455f3b0d0\" (UID: \"ffaa6723-7110-4200-a903-e89455f3b0d0\") " Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.519781 5005 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-utilities" (OuterVolumeSpecName: "utilities") pod "ffaa6723-7110-4200-a903-e89455f3b0d0" (UID: "ffaa6723-7110-4200-a903-e89455f3b0d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.535744 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaa6723-7110-4200-a903-e89455f3b0d0-kube-api-access-588pb" (OuterVolumeSpecName: "kube-api-access-588pb") pod "ffaa6723-7110-4200-a903-e89455f3b0d0" (UID: "ffaa6723-7110-4200-a903-e89455f3b0d0"). InnerVolumeSpecName "kube-api-access-588pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.621183 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.621212 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-588pb\" (UniqueName: \"kubernetes.io/projected/ffaa6723-7110-4200-a903-e89455f3b0d0-kube-api-access-588pb\") on node \"crc\" DevicePath \"\"" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.625251 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffaa6723-7110-4200-a903-e89455f3b0d0" (UID: "ffaa6723-7110-4200-a903-e89455f3b0d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.704863 5005 generic.go:334] "Generic (PLEG): container finished" podID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerID="cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14" exitCode=0 Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.704940 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kxd5" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.704979 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerDied","Data":"cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14"} Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.705977 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kxd5" event={"ID":"ffaa6723-7110-4200-a903-e89455f3b0d0","Type":"ContainerDied","Data":"7071d24e9a1b7aff71ee1dd5c26364b896f895b18c63b8c564f0b49f82fe747e"} Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.705997 5005 scope.go:117] "RemoveContainer" containerID="cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.722895 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaa6723-7110-4200-a903-e89455f3b0d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.730726 5005 scope.go:117] "RemoveContainer" containerID="aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.746696 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kxd5"] Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 
12:44:53.755689 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8kxd5"] Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.767878 5005 scope.go:117] "RemoveContainer" containerID="679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.798322 5005 scope.go:117] "RemoveContainer" containerID="cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14" Feb 25 12:44:53 crc kubenswrapper[5005]: E0225 12:44:53.798877 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14\": container with ID starting with cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14 not found: ID does not exist" containerID="cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.798926 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14"} err="failed to get container status \"cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14\": rpc error: code = NotFound desc = could not find container \"cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14\": container with ID starting with cd3a647afe48089efd9b5134158e11ce96c9595e7cfc37fd6f469dee8b96ab14 not found: ID does not exist" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.798959 5005 scope.go:117] "RemoveContainer" containerID="aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73" Feb 25 12:44:53 crc kubenswrapper[5005]: E0225 12:44:53.799419 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73\": container with ID 
starting with aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73 not found: ID does not exist" containerID="aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.799452 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73"} err="failed to get container status \"aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73\": rpc error: code = NotFound desc = could not find container \"aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73\": container with ID starting with aaa0e351ee42acf347ee30db53764b0df3b29c70ef9209091f4c811ad7b0fa73 not found: ID does not exist" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.799477 5005 scope.go:117] "RemoveContainer" containerID="679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b" Feb 25 12:44:53 crc kubenswrapper[5005]: E0225 12:44:53.800027 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b\": container with ID starting with 679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b not found: ID does not exist" containerID="679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b" Feb 25 12:44:53 crc kubenswrapper[5005]: I0225 12:44:53.800078 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b"} err="failed to get container status \"679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b\": rpc error: code = NotFound desc = could not find container \"679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b\": container with ID starting with 679c2ff6984a52b1768239aac5243b4630a4441066c879eaf09066ce82d9ce3b not found: 
ID does not exist" Feb 25 12:44:54 crc kubenswrapper[5005]: I0225 12:44:54.698831 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" path="/var/lib/kubelet/pods/ffaa6723-7110-4200-a903-e89455f3b0d0/volumes" Feb 25 12:44:58 crc kubenswrapper[5005]: I0225 12:44:58.688172 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:44:58 crc kubenswrapper[5005]: E0225 12:44:58.690269 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.157074 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk"] Feb 25 12:45:00 crc kubenswrapper[5005]: E0225 12:45:00.157774 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="extract-content" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.157786 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="extract-content" Feb 25 12:45:00 crc kubenswrapper[5005]: E0225 12:45:00.157802 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72227b2a-5882-4611-8534-7d06c7f26649" containerName="oc" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.157808 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="72227b2a-5882-4611-8534-7d06c7f26649" containerName="oc" Feb 25 12:45:00 crc kubenswrapper[5005]: E0225 12:45:00.157829 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.157835 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" Feb 25 12:45:00 crc kubenswrapper[5005]: E0225 12:45:00.157848 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="extract-utilities" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.157856 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="extract-utilities" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.158017 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaa6723-7110-4200-a903-e89455f3b0d0" containerName="registry-server" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.158031 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="72227b2a-5882-4611-8534-7d06c7f26649" containerName="oc" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.158659 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.161886 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.162595 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.168442 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk"] Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.244301 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6jm\" (UniqueName: \"kubernetes.io/projected/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-kube-api-access-wn6jm\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.244506 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-config-volume\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.245008 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-secret-volume\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.346864 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6jm\" (UniqueName: \"kubernetes.io/projected/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-kube-api-access-wn6jm\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.347012 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-config-volume\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.347119 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-secret-volume\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.347976 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-config-volume\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.352854 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-secret-volume\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.369192 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6jm\" (UniqueName: \"kubernetes.io/projected/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-kube-api-access-wn6jm\") pod \"collect-profiles-29533725-k4qtk\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.491589 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:00 crc kubenswrapper[5005]: I0225 12:45:00.907744 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk"] Feb 25 12:45:00 crc kubenswrapper[5005]: W0225 12:45:00.917242 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e3eb95e_5fe6_437b_bcb2_6398a67047e3.slice/crio-6335edc6f1e8bb20caac0ada5579aca0f0ae10303d73af306d8056ffbab4f549 WatchSource:0}: Error finding container 6335edc6f1e8bb20caac0ada5579aca0f0ae10303d73af306d8056ffbab4f549: Status 404 returned error can't find the container with id 6335edc6f1e8bb20caac0ada5579aca0f0ae10303d73af306d8056ffbab4f549 Feb 25 12:45:01 crc kubenswrapper[5005]: I0225 12:45:01.800640 5005 generic.go:334] "Generic (PLEG): container finished" podID="2e3eb95e-5fe6-437b-bcb2-6398a67047e3" containerID="60b626f194b038864217313f9b0d414505f3828f31558686c5072876a4fb0af5" exitCode=0 Feb 25 12:45:01 crc kubenswrapper[5005]: I0225 12:45:01.800787 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" event={"ID":"2e3eb95e-5fe6-437b-bcb2-6398a67047e3","Type":"ContainerDied","Data":"60b626f194b038864217313f9b0d414505f3828f31558686c5072876a4fb0af5"} Feb 25 12:45:01 crc kubenswrapper[5005]: I0225 12:45:01.800964 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" event={"ID":"2e3eb95e-5fe6-437b-bcb2-6398a67047e3","Type":"ContainerStarted","Data":"6335edc6f1e8bb20caac0ada5579aca0f0ae10303d73af306d8056ffbab4f549"} Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.345483 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.524647 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-config-volume\") pod \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.524713 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-secret-volume\") pod \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.524771 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6jm\" (UniqueName: \"kubernetes.io/projected/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-kube-api-access-wn6jm\") pod \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\" (UID: \"2e3eb95e-5fe6-437b-bcb2-6398a67047e3\") " Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.525445 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e3eb95e-5fe6-437b-bcb2-6398a67047e3" (UID: "2e3eb95e-5fe6-437b-bcb2-6398a67047e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.530788 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e3eb95e-5fe6-437b-bcb2-6398a67047e3" (UID: "2e3eb95e-5fe6-437b-bcb2-6398a67047e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.530889 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-kube-api-access-wn6jm" (OuterVolumeSpecName: "kube-api-access-wn6jm") pod "2e3eb95e-5fe6-437b-bcb2-6398a67047e3" (UID: "2e3eb95e-5fe6-437b-bcb2-6398a67047e3"). InnerVolumeSpecName "kube-api-access-wn6jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.627568 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.627599 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.627609 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn6jm\" (UniqueName: \"kubernetes.io/projected/2e3eb95e-5fe6-437b-bcb2-6398a67047e3-kube-api-access-wn6jm\") on node \"crc\" DevicePath \"\"" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.820032 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" event={"ID":"2e3eb95e-5fe6-437b-bcb2-6398a67047e3","Type":"ContainerDied","Data":"6335edc6f1e8bb20caac0ada5579aca0f0ae10303d73af306d8056ffbab4f549"} Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.820402 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6335edc6f1e8bb20caac0ada5579aca0f0ae10303d73af306d8056ffbab4f549" Feb 25 12:45:03 crc kubenswrapper[5005]: I0225 12:45:03.820133 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk" Feb 25 12:45:04 crc kubenswrapper[5005]: I0225 12:45:04.433318 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666"] Feb 25 12:45:04 crc kubenswrapper[5005]: I0225 12:45:04.445229 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533680-t4666"] Feb 25 12:45:04 crc kubenswrapper[5005]: I0225 12:45:04.700758 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf323fa1-3a7a-4244-88b9-e704b090f3ed" path="/var/lib/kubelet/pods/cf323fa1-3a7a-4244-88b9-e704b090f3ed/volumes" Feb 25 12:45:11 crc kubenswrapper[5005]: I0225 12:45:11.685651 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:45:11 crc kubenswrapper[5005]: E0225 12:45:11.686367 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:45:22 crc kubenswrapper[5005]: I0225 12:45:22.492451 5005 scope.go:117] "RemoveContainer" containerID="629658fa21a9ae797a251bb9e8690b3742201b6361addbe198cb95f6b4ebc248" Feb 25 12:45:26 crc kubenswrapper[5005]: I0225 12:45:26.691359 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:45:26 crc kubenswrapper[5005]: E0225 12:45:26.691982 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:45:39 crc kubenswrapper[5005]: I0225 12:45:39.685991 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:45:39 crc kubenswrapper[5005]: E0225 12:45:39.686991 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:45:52 crc kubenswrapper[5005]: I0225 12:45:52.686363 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:45:52 crc kubenswrapper[5005]: E0225 12:45:52.687293 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.145708 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533726-p95lm"] Feb 25 12:46:00 crc kubenswrapper[5005]: E0225 12:46:00.146639 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3eb95e-5fe6-437b-bcb2-6398a67047e3" containerName="collect-profiles" Feb 25 
12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.146651 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3eb95e-5fe6-437b-bcb2-6398a67047e3" containerName="collect-profiles" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.146849 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3eb95e-5fe6-437b-bcb2-6398a67047e3" containerName="collect-profiles" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.147491 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.149350 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.149616 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.149960 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.161157 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533726-p95lm"] Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.279748 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4bz2\" (UniqueName: \"kubernetes.io/projected/acf36d31-44e4-460a-85b5-168cdfdb31a0-kube-api-access-z4bz2\") pod \"auto-csr-approver-29533726-p95lm\" (UID: \"acf36d31-44e4-460a-85b5-168cdfdb31a0\") " pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.381439 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4bz2\" (UniqueName: \"kubernetes.io/projected/acf36d31-44e4-460a-85b5-168cdfdb31a0-kube-api-access-z4bz2\") pod 
\"auto-csr-approver-29533726-p95lm\" (UID: \"acf36d31-44e4-460a-85b5-168cdfdb31a0\") " pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.403748 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4bz2\" (UniqueName: \"kubernetes.io/projected/acf36d31-44e4-460a-85b5-168cdfdb31a0-kube-api-access-z4bz2\") pod \"auto-csr-approver-29533726-p95lm\" (UID: \"acf36d31-44e4-460a-85b5-168cdfdb31a0\") " pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.470041 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:00 crc kubenswrapper[5005]: I0225 12:46:00.904110 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533726-p95lm"] Feb 25 12:46:01 crc kubenswrapper[5005]: I0225 12:46:01.306463 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533726-p95lm" event={"ID":"acf36d31-44e4-460a-85b5-168cdfdb31a0","Type":"ContainerStarted","Data":"efddeccd5443309c86ece6b17bd9fced35d85665067197a34348d57a147d8772"} Feb 25 12:46:02 crc kubenswrapper[5005]: I0225 12:46:02.316742 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533726-p95lm" event={"ID":"acf36d31-44e4-460a-85b5-168cdfdb31a0","Type":"ContainerStarted","Data":"930abb6f9f2aa655e8b5f4748b047ffd7caf64b53241d890e7b79b949f58b9ab"} Feb 25 12:46:03 crc kubenswrapper[5005]: I0225 12:46:03.327429 5005 generic.go:334] "Generic (PLEG): container finished" podID="acf36d31-44e4-460a-85b5-168cdfdb31a0" containerID="930abb6f9f2aa655e8b5f4748b047ffd7caf64b53241d890e7b79b949f58b9ab" exitCode=0 Feb 25 12:46:03 crc kubenswrapper[5005]: I0225 12:46:03.327526 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533726-p95lm" 
event={"ID":"acf36d31-44e4-460a-85b5-168cdfdb31a0","Type":"ContainerDied","Data":"930abb6f9f2aa655e8b5f4748b047ffd7caf64b53241d890e7b79b949f58b9ab"} Feb 25 12:46:04 crc kubenswrapper[5005]: I0225 12:46:04.869605 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:04 crc kubenswrapper[5005]: I0225 12:46:04.970928 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4bz2\" (UniqueName: \"kubernetes.io/projected/acf36d31-44e4-460a-85b5-168cdfdb31a0-kube-api-access-z4bz2\") pod \"acf36d31-44e4-460a-85b5-168cdfdb31a0\" (UID: \"acf36d31-44e4-460a-85b5-168cdfdb31a0\") " Feb 25 12:46:04 crc kubenswrapper[5005]: I0225 12:46:04.976256 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf36d31-44e4-460a-85b5-168cdfdb31a0-kube-api-access-z4bz2" (OuterVolumeSpecName: "kube-api-access-z4bz2") pod "acf36d31-44e4-460a-85b5-168cdfdb31a0" (UID: "acf36d31-44e4-460a-85b5-168cdfdb31a0"). InnerVolumeSpecName "kube-api-access-z4bz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.073269 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4bz2\" (UniqueName: \"kubernetes.io/projected/acf36d31-44e4-460a-85b5-168cdfdb31a0-kube-api-access-z4bz2\") on node \"crc\" DevicePath \"\"" Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.355290 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533726-p95lm" event={"ID":"acf36d31-44e4-460a-85b5-168cdfdb31a0","Type":"ContainerDied","Data":"efddeccd5443309c86ece6b17bd9fced35d85665067197a34348d57a147d8772"} Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.355329 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efddeccd5443309c86ece6b17bd9fced35d85665067197a34348d57a147d8772" Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.355430 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533726-p95lm" Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.686213 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.969734 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533720-nwmhs"] Feb 25 12:46:05 crc kubenswrapper[5005]: I0225 12:46:05.981923 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533720-nwmhs"] Feb 25 12:46:06 crc kubenswrapper[5005]: I0225 12:46:06.364903 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"afc7fe4cfa5ffc5c5f3d76fa24f478e0e6978fea9ed24e2c2fab6f085ed5dd41"} Feb 25 12:46:06 crc kubenswrapper[5005]: I0225 
12:46:06.694967 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c3ec58-da0e-49f6-858d-a48acb64275e" path="/var/lib/kubelet/pods/35c3ec58-da0e-49f6-858d-a48acb64275e/volumes" Feb 25 12:46:22 crc kubenswrapper[5005]: I0225 12:46:22.581467 5005 scope.go:117] "RemoveContainer" containerID="b96677f4e7392b8d6b2bca1f66d988af7a38989cdb1fb2da003a68da0d102dc0" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.829446 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rqcmg"] Feb 25 12:46:47 crc kubenswrapper[5005]: E0225 12:46:47.830568 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf36d31-44e4-460a-85b5-168cdfdb31a0" containerName="oc" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.830587 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf36d31-44e4-460a-85b5-168cdfdb31a0" containerName="oc" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.830822 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf36d31-44e4-460a-85b5-168cdfdb31a0" containerName="oc" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.832496 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.841744 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqcmg"] Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.875796 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-utilities\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.875888 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-catalog-content\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.875974 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fncd\" (UniqueName: \"kubernetes.io/projected/637bc7f7-5255-45e1-9986-66625e92bb81-kube-api-access-2fncd\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.978224 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-catalog-content\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.978330 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2fncd\" (UniqueName: \"kubernetes.io/projected/637bc7f7-5255-45e1-9986-66625e92bb81-kube-api-access-2fncd\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.978456 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-utilities\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.978891 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-utilities\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.979176 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-catalog-content\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:47 crc kubenswrapper[5005]: I0225 12:46:47.997295 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fncd\" (UniqueName: \"kubernetes.io/projected/637bc7f7-5255-45e1-9986-66625e92bb81-kube-api-access-2fncd\") pod \"community-operators-rqcmg\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:48 crc kubenswrapper[5005]: I0225 12:46:48.149801 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:48 crc kubenswrapper[5005]: I0225 12:46:48.640493 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqcmg"] Feb 25 12:46:48 crc kubenswrapper[5005]: I0225 12:46:48.742961 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerStarted","Data":"1bd25139ff5004339523bc630f61a113a7c2f0d90a157d7c823f2aef7567ab59"} Feb 25 12:46:49 crc kubenswrapper[5005]: I0225 12:46:49.766219 5005 generic.go:334] "Generic (PLEG): container finished" podID="637bc7f7-5255-45e1-9986-66625e92bb81" containerID="942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811" exitCode=0 Feb 25 12:46:49 crc kubenswrapper[5005]: I0225 12:46:49.766808 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerDied","Data":"942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811"} Feb 25 12:46:50 crc kubenswrapper[5005]: I0225 12:46:50.777753 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerStarted","Data":"252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956"} Feb 25 12:46:52 crc kubenswrapper[5005]: I0225 12:46:52.799944 5005 generic.go:334] "Generic (PLEG): container finished" podID="637bc7f7-5255-45e1-9986-66625e92bb81" containerID="252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956" exitCode=0 Feb 25 12:46:52 crc kubenswrapper[5005]: I0225 12:46:52.800039 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" 
event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerDied","Data":"252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956"} Feb 25 12:46:53 crc kubenswrapper[5005]: I0225 12:46:53.810517 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerStarted","Data":"24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9"} Feb 25 12:46:53 crc kubenswrapper[5005]: I0225 12:46:53.840047 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rqcmg" podStartSLOduration=3.378319694 podStartE2EDuration="6.840020856s" podCreationTimestamp="2026-02-25 12:46:47 +0000 UTC" firstStartedPulling="2026-02-25 12:46:49.770518052 +0000 UTC m=+5323.811250379" lastFinishedPulling="2026-02-25 12:46:53.232219214 +0000 UTC m=+5327.272951541" observedRunningTime="2026-02-25 12:46:53.826986415 +0000 UTC m=+5327.867718762" watchObservedRunningTime="2026-02-25 12:46:53.840020856 +0000 UTC m=+5327.880753203" Feb 25 12:46:58 crc kubenswrapper[5005]: I0225 12:46:58.150716 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:58 crc kubenswrapper[5005]: I0225 12:46:58.151144 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:58 crc kubenswrapper[5005]: I0225 12:46:58.202751 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:58 crc kubenswrapper[5005]: I0225 12:46:58.908656 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:46:58 crc kubenswrapper[5005]: I0225 12:46:58.972207 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rqcmg"] Feb 25 12:47:00 crc kubenswrapper[5005]: I0225 12:47:00.865534 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rqcmg" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="registry-server" containerID="cri-o://24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9" gracePeriod=2 Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.548395 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.601120 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-catalog-content\") pod \"637bc7f7-5255-45e1-9986-66625e92bb81\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.601395 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-utilities\") pod \"637bc7f7-5255-45e1-9986-66625e92bb81\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.601524 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fncd\" (UniqueName: \"kubernetes.io/projected/637bc7f7-5255-45e1-9986-66625e92bb81-kube-api-access-2fncd\") pod \"637bc7f7-5255-45e1-9986-66625e92bb81\" (UID: \"637bc7f7-5255-45e1-9986-66625e92bb81\") " Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.603276 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-utilities" (OuterVolumeSpecName: "utilities") pod "637bc7f7-5255-45e1-9986-66625e92bb81" (UID: 
"637bc7f7-5255-45e1-9986-66625e92bb81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.609486 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637bc7f7-5255-45e1-9986-66625e92bb81-kube-api-access-2fncd" (OuterVolumeSpecName: "kube-api-access-2fncd") pod "637bc7f7-5255-45e1-9986-66625e92bb81" (UID: "637bc7f7-5255-45e1-9986-66625e92bb81"). InnerVolumeSpecName "kube-api-access-2fncd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.664550 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "637bc7f7-5255-45e1-9986-66625e92bb81" (UID: "637bc7f7-5255-45e1-9986-66625e92bb81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.704779 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.704834 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/637bc7f7-5255-45e1-9986-66625e92bb81-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.704856 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fncd\" (UniqueName: \"kubernetes.io/projected/637bc7f7-5255-45e1-9986-66625e92bb81-kube-api-access-2fncd\") on node \"crc\" DevicePath \"\"" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.875862 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="637bc7f7-5255-45e1-9986-66625e92bb81" containerID="24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9" exitCode=0 Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.875927 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerDied","Data":"24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9"} Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.876287 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqcmg" event={"ID":"637bc7f7-5255-45e1-9986-66625e92bb81","Type":"ContainerDied","Data":"1bd25139ff5004339523bc630f61a113a7c2f0d90a157d7c823f2aef7567ab59"} Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.876318 5005 scope.go:117] "RemoveContainer" containerID="24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.875946 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqcmg" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.896632 5005 scope.go:117] "RemoveContainer" containerID="252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.937525 5005 scope.go:117] "RemoveContainer" containerID="942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.943185 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqcmg"] Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.955041 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rqcmg"] Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.991611 5005 scope.go:117] "RemoveContainer" containerID="24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9" Feb 25 12:47:01 crc kubenswrapper[5005]: E0225 12:47:01.992095 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9\": container with ID starting with 24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9 not found: ID does not exist" containerID="24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.992127 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9"} err="failed to get container status \"24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9\": rpc error: code = NotFound desc = could not find container \"24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9\": container with ID starting with 24d5e94f20ce4443d9926d89f7424b60a746df72e797c8ceca225000da0c95e9 not 
found: ID does not exist" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.992148 5005 scope.go:117] "RemoveContainer" containerID="252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956" Feb 25 12:47:01 crc kubenswrapper[5005]: E0225 12:47:01.992520 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956\": container with ID starting with 252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956 not found: ID does not exist" containerID="252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.992539 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956"} err="failed to get container status \"252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956\": rpc error: code = NotFound desc = could not find container \"252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956\": container with ID starting with 252c92c7aacadc03ad288786ce62544e05e5bb42b91cd1139e08992636197956 not found: ID does not exist" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.992551 5005 scope.go:117] "RemoveContainer" containerID="942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811" Feb 25 12:47:01 crc kubenswrapper[5005]: E0225 12:47:01.992820 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811\": container with ID starting with 942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811 not found: ID does not exist" containerID="942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811" Feb 25 12:47:01 crc kubenswrapper[5005]: I0225 12:47:01.992860 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811"} err="failed to get container status \"942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811\": rpc error: code = NotFound desc = could not find container \"942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811\": container with ID starting with 942bdb54aa7e9650915a39e35b5e40e589575267fdd5135ce597b09cbbe86811 not found: ID does not exist" Feb 25 12:47:02 crc kubenswrapper[5005]: I0225 12:47:02.704211 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" path="/var/lib/kubelet/pods/637bc7f7-5255-45e1-9986-66625e92bb81/volumes" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.152851 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533728-65q7n"] Feb 25 12:48:00 crc kubenswrapper[5005]: E0225 12:48:00.153881 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="registry-server" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.153897 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="registry-server" Feb 25 12:48:00 crc kubenswrapper[5005]: E0225 12:48:00.153930 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="extract-utilities" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.153940 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="extract-utilities" Feb 25 12:48:00 crc kubenswrapper[5005]: E0225 12:48:00.153980 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="extract-content" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 
12:48:00.153990 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="extract-content" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.154214 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="637bc7f7-5255-45e1-9986-66625e92bb81" containerName="registry-server" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.155047 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.159955 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.160100 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.160585 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.162622 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533728-65q7n"] Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.272126 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlt5w\" (UniqueName: \"kubernetes.io/projected/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79-kube-api-access-mlt5w\") pod \"auto-csr-approver-29533728-65q7n\" (UID: \"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79\") " pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.373521 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlt5w\" (UniqueName: \"kubernetes.io/projected/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79-kube-api-access-mlt5w\") pod \"auto-csr-approver-29533728-65q7n\" (UID: 
\"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79\") " pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.393481 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlt5w\" (UniqueName: \"kubernetes.io/projected/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79-kube-api-access-mlt5w\") pod \"auto-csr-approver-29533728-65q7n\" (UID: \"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79\") " pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.472974 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.965772 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533728-65q7n"] Feb 25 12:48:00 crc kubenswrapper[5005]: W0225 12:48:00.969900 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49b8fe63_d4f0_43e0_ab11_e7cd3fdc2e79.slice/crio-5a6d11163b5686f6f09beb527a850d515c12ccefe239b8796f1ce03a6be4fe1b WatchSource:0}: Error finding container 5a6d11163b5686f6f09beb527a850d515c12ccefe239b8796f1ce03a6be4fe1b: Status 404 returned error can't find the container with id 5a6d11163b5686f6f09beb527a850d515c12ccefe239b8796f1ce03a6be4fe1b Feb 25 12:48:00 crc kubenswrapper[5005]: I0225 12:48:00.972423 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:48:01 crc kubenswrapper[5005]: I0225 12:48:01.490365 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533728-65q7n" event={"ID":"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79","Type":"ContainerStarted","Data":"5a6d11163b5686f6f09beb527a850d515c12ccefe239b8796f1ce03a6be4fe1b"} Feb 25 12:48:03 crc kubenswrapper[5005]: I0225 12:48:03.517168 5005 generic.go:334] "Generic 
(PLEG): container finished" podID="49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79" containerID="d5aae84e05ed2e95952589cb3d90cab7b21ac26b2dff15ee43ac52146e9866aa" exitCode=0 Feb 25 12:48:03 crc kubenswrapper[5005]: I0225 12:48:03.517593 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533728-65q7n" event={"ID":"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79","Type":"ContainerDied","Data":"d5aae84e05ed2e95952589cb3d90cab7b21ac26b2dff15ee43ac52146e9866aa"} Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.243973 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.303418 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlt5w\" (UniqueName: \"kubernetes.io/projected/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79-kube-api-access-mlt5w\") pod \"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79\" (UID: \"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79\") " Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.311922 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79-kube-api-access-mlt5w" (OuterVolumeSpecName: "kube-api-access-mlt5w") pod "49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79" (UID: "49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79"). InnerVolumeSpecName "kube-api-access-mlt5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.405343 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlt5w\" (UniqueName: \"kubernetes.io/projected/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79-kube-api-access-mlt5w\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.536619 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533728-65q7n" event={"ID":"49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79","Type":"ContainerDied","Data":"5a6d11163b5686f6f09beb527a850d515c12ccefe239b8796f1ce03a6be4fe1b"} Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.536667 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a6d11163b5686f6f09beb527a850d515c12ccefe239b8796f1ce03a6be4fe1b" Feb 25 12:48:05 crc kubenswrapper[5005]: I0225 12:48:05.536669 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533728-65q7n" Feb 25 12:48:06 crc kubenswrapper[5005]: I0225 12:48:06.329877 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533722-cmbf6"] Feb 25 12:48:06 crc kubenswrapper[5005]: I0225 12:48:06.345288 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533722-cmbf6"] Feb 25 12:48:06 crc kubenswrapper[5005]: I0225 12:48:06.725331 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c" path="/var/lib/kubelet/pods/92a7b7c4-0ff6-4cef-83b9-5a57df3a0a0c/volumes" Feb 25 12:48:15 crc kubenswrapper[5005]: I0225 12:48:15.984975 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kzkr"] Feb 25 12:48:15 crc kubenswrapper[5005]: E0225 12:48:15.986010 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79" containerName="oc" Feb 25 12:48:15 crc kubenswrapper[5005]: I0225 12:48:15.986029 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79" containerName="oc" Feb 25 12:48:15 crc kubenswrapper[5005]: I0225 12:48:15.986274 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79" containerName="oc" Feb 25 12:48:15 crc kubenswrapper[5005]: I0225 12:48:15.987913 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.000636 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kzkr"] Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.110747 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpn5\" (UniqueName: \"kubernetes.io/projected/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-kube-api-access-7wpn5\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.110875 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-utilities\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.110898 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-catalog-content\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " 
pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.212637 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-utilities\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.212681 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-catalog-content\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.212778 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpn5\" (UniqueName: \"kubernetes.io/projected/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-kube-api-access-7wpn5\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.213400 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-catalog-content\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.213574 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-utilities\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " 
pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.566181 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpn5\" (UniqueName: \"kubernetes.io/projected/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-kube-api-access-7wpn5\") pod \"certified-operators-8kzkr\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:16 crc kubenswrapper[5005]: I0225 12:48:16.627748 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:17 crc kubenswrapper[5005]: I0225 12:48:17.127657 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kzkr"] Feb 25 12:48:17 crc kubenswrapper[5005]: I0225 12:48:17.653432 5005 generic.go:334] "Generic (PLEG): container finished" podID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerID="89af8567bcd5c8f6775d9843bd5fc1f1bb76d9de227c84fa2eaf7cda57df9de8" exitCode=0 Feb 25 12:48:17 crc kubenswrapper[5005]: I0225 12:48:17.653486 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerDied","Data":"89af8567bcd5c8f6775d9843bd5fc1f1bb76d9de227c84fa2eaf7cda57df9de8"} Feb 25 12:48:17 crc kubenswrapper[5005]: I0225 12:48:17.653517 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerStarted","Data":"d4cdfe43cf30b4497d788e6a5ab10676cc0207c3d0c7b8a8648818ee15048df9"} Feb 25 12:48:18 crc kubenswrapper[5005]: I0225 12:48:18.663216 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" 
event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerStarted","Data":"2ae3b916da14e00a4bdfa5855cca83a27c2746b977c495cd0cd33c08eb519c97"} Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.768708 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x4d88"] Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.777548 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.780262 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4d88"] Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.887540 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxbk\" (UniqueName: \"kubernetes.io/projected/df705b09-876e-4d70-93d2-8b0ba613f35d-kube-api-access-llxbk\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.887610 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-utilities\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.887721 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-catalog-content\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.989609 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-catalog-content\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.989765 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxbk\" (UniqueName: \"kubernetes.io/projected/df705b09-876e-4d70-93d2-8b0ba613f35d-kube-api-access-llxbk\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.989806 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-utilities\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.990420 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-utilities\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:19 crc kubenswrapper[5005]: I0225 12:48:19.990788 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-catalog-content\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:20 crc kubenswrapper[5005]: I0225 12:48:20.015599 5005 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-llxbk\" (UniqueName: \"kubernetes.io/projected/df705b09-876e-4d70-93d2-8b0ba613f35d-kube-api-access-llxbk\") pod \"redhat-marketplace-x4d88\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:20 crc kubenswrapper[5005]: I0225 12:48:20.110122 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:20 crc kubenswrapper[5005]: I0225 12:48:20.617660 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4d88"] Feb 25 12:48:20 crc kubenswrapper[5005]: I0225 12:48:20.697179 5005 generic.go:334] "Generic (PLEG): container finished" podID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerID="2ae3b916da14e00a4bdfa5855cca83a27c2746b977c495cd0cd33c08eb519c97" exitCode=0 Feb 25 12:48:20 crc kubenswrapper[5005]: I0225 12:48:20.699513 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4d88" event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerStarted","Data":"137a92eacf15f65fe1139e7efbcad299b651bdbaa8a4b9ef8730ee8f3f1165b3"} Feb 25 12:48:20 crc kubenswrapper[5005]: I0225 12:48:20.699541 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerDied","Data":"2ae3b916da14e00a4bdfa5855cca83a27c2746b977c495cd0cd33c08eb519c97"} Feb 25 12:48:21 crc kubenswrapper[5005]: I0225 12:48:21.707611 5005 generic.go:334] "Generic (PLEG): container finished" podID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerID="18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9" exitCode=0 Feb 25 12:48:21 crc kubenswrapper[5005]: I0225 12:48:21.707720 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4d88" 
event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerDied","Data":"18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9"} Feb 25 12:48:21 crc kubenswrapper[5005]: I0225 12:48:21.710242 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerStarted","Data":"ba5bc5817195ad8b4ae8469346fc4c0097369590559ecdb6eb86acf31c5de011"} Feb 25 12:48:21 crc kubenswrapper[5005]: I0225 12:48:21.752010 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kzkr" podStartSLOduration=3.135526242 podStartE2EDuration="6.751995153s" podCreationTimestamp="2026-02-25 12:48:15 +0000 UTC" firstStartedPulling="2026-02-25 12:48:17.656458286 +0000 UTC m=+5411.697190613" lastFinishedPulling="2026-02-25 12:48:21.272927207 +0000 UTC m=+5415.313659524" observedRunningTime="2026-02-25 12:48:21.74829741 +0000 UTC m=+5415.789029737" watchObservedRunningTime="2026-02-25 12:48:21.751995153 +0000 UTC m=+5415.792727480" Feb 25 12:48:22 crc kubenswrapper[5005]: I0225 12:48:22.698508 5005 scope.go:117] "RemoveContainer" containerID="1b6c217f684b6f033908f6ff727acc77fde0defd9fdb61b5c2aa9711a21324fc" Feb 25 12:48:22 crc kubenswrapper[5005]: I0225 12:48:22.723723 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4d88" event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerStarted","Data":"b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd"} Feb 25 12:48:23 crc kubenswrapper[5005]: I0225 12:48:23.737222 5005 generic.go:334] "Generic (PLEG): container finished" podID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerID="b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd" exitCode=0 Feb 25 12:48:23 crc kubenswrapper[5005]: I0225 12:48:23.737329 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-x4d88" event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerDied","Data":"b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd"} Feb 25 12:48:24 crc kubenswrapper[5005]: I0225 12:48:24.750192 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4d88" event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerStarted","Data":"c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e"} Feb 25 12:48:24 crc kubenswrapper[5005]: I0225 12:48:24.780818 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x4d88" podStartSLOduration=3.3075027710000002 podStartE2EDuration="5.780797115s" podCreationTimestamp="2026-02-25 12:48:19 +0000 UTC" firstStartedPulling="2026-02-25 12:48:21.711030919 +0000 UTC m=+5415.751763256" lastFinishedPulling="2026-02-25 12:48:24.184325263 +0000 UTC m=+5418.225057600" observedRunningTime="2026-02-25 12:48:24.769111074 +0000 UTC m=+5418.809843411" watchObservedRunningTime="2026-02-25 12:48:24.780797115 +0000 UTC m=+5418.821529452" Feb 25 12:48:26 crc kubenswrapper[5005]: I0225 12:48:26.628161 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:26 crc kubenswrapper[5005]: I0225 12:48:26.628586 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:27 crc kubenswrapper[5005]: I0225 12:48:27.707419 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8kzkr" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="registry-server" probeResult="failure" output=< Feb 25 12:48:27 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:48:27 crc kubenswrapper[5005]: > Feb 25 12:48:28 crc 
kubenswrapper[5005]: I0225 12:48:28.087319 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:48:28 crc kubenswrapper[5005]: I0225 12:48:28.087411 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:48:30 crc kubenswrapper[5005]: I0225 12:48:30.110595 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:30 crc kubenswrapper[5005]: I0225 12:48:30.110981 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:30 crc kubenswrapper[5005]: I0225 12:48:30.164469 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:30 crc kubenswrapper[5005]: I0225 12:48:30.867341 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:30 crc kubenswrapper[5005]: I0225 12:48:30.940396 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4d88"] Feb 25 12:48:32 crc kubenswrapper[5005]: I0225 12:48:32.824425 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x4d88" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="registry-server" 
containerID="cri-o://c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e" gracePeriod=2 Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.447135 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.599789 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llxbk\" (UniqueName: \"kubernetes.io/projected/df705b09-876e-4d70-93d2-8b0ba613f35d-kube-api-access-llxbk\") pod \"df705b09-876e-4d70-93d2-8b0ba613f35d\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.600181 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-catalog-content\") pod \"df705b09-876e-4d70-93d2-8b0ba613f35d\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.600389 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-utilities\") pod \"df705b09-876e-4d70-93d2-8b0ba613f35d\" (UID: \"df705b09-876e-4d70-93d2-8b0ba613f35d\") " Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.601457 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-utilities" (OuterVolumeSpecName: "utilities") pod "df705b09-876e-4d70-93d2-8b0ba613f35d" (UID: "df705b09-876e-4d70-93d2-8b0ba613f35d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.607612 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df705b09-876e-4d70-93d2-8b0ba613f35d-kube-api-access-llxbk" (OuterVolumeSpecName: "kube-api-access-llxbk") pod "df705b09-876e-4d70-93d2-8b0ba613f35d" (UID: "df705b09-876e-4d70-93d2-8b0ba613f35d"). InnerVolumeSpecName "kube-api-access-llxbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.636077 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df705b09-876e-4d70-93d2-8b0ba613f35d" (UID: "df705b09-876e-4d70-93d2-8b0ba613f35d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.702605 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.702831 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llxbk\" (UniqueName: \"kubernetes.io/projected/df705b09-876e-4d70-93d2-8b0ba613f35d-kube-api-access-llxbk\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.702891 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df705b09-876e-4d70-93d2-8b0ba613f35d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.834077 5005 generic.go:334] "Generic (PLEG): container finished" podID="df705b09-876e-4d70-93d2-8b0ba613f35d" 
containerID="c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e" exitCode=0 Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.834144 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4d88" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.834165 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4d88" event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerDied","Data":"c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e"} Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.835440 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4d88" event={"ID":"df705b09-876e-4d70-93d2-8b0ba613f35d","Type":"ContainerDied","Data":"137a92eacf15f65fe1139e7efbcad299b651bdbaa8a4b9ef8730ee8f3f1165b3"} Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.835472 5005 scope.go:117] "RemoveContainer" containerID="c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.874951 5005 scope.go:117] "RemoveContainer" containerID="b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.878432 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4d88"] Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.894290 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4d88"] Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.898592 5005 scope.go:117] "RemoveContainer" containerID="18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.941537 5005 scope.go:117] "RemoveContainer" containerID="c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e" Feb 25 
12:48:33 crc kubenswrapper[5005]: E0225 12:48:33.941952 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e\": container with ID starting with c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e not found: ID does not exist" containerID="c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.942006 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e"} err="failed to get container status \"c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e\": rpc error: code = NotFound desc = could not find container \"c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e\": container with ID starting with c2272efdeab72643621880226e2bc3d83ab20fbffe5a5d613873959758112c2e not found: ID does not exist" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.942039 5005 scope.go:117] "RemoveContainer" containerID="b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd" Feb 25 12:48:33 crc kubenswrapper[5005]: E0225 12:48:33.942576 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd\": container with ID starting with b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd not found: ID does not exist" containerID="b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.942634 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd"} err="failed to get container status 
\"b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd\": rpc error: code = NotFound desc = could not find container \"b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd\": container with ID starting with b4732f7b231e7ab8df8ad46bf16bed1cfac88e9061eef3527d1f7abf5a09e3fd not found: ID does not exist" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.942677 5005 scope.go:117] "RemoveContainer" containerID="18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9" Feb 25 12:48:33 crc kubenswrapper[5005]: E0225 12:48:33.943275 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9\": container with ID starting with 18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9 not found: ID does not exist" containerID="18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9" Feb 25 12:48:33 crc kubenswrapper[5005]: I0225 12:48:33.943306 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9"} err="failed to get container status \"18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9\": rpc error: code = NotFound desc = could not find container \"18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9\": container with ID starting with 18c5bac44215210e9c6e83d5296221d06a4a766ddbcc8ed1dca75233f08c3ca9 not found: ID does not exist" Feb 25 12:48:34 crc kubenswrapper[5005]: I0225 12:48:34.700097 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" path="/var/lib/kubelet/pods/df705b09-876e-4d70-93d2-8b0ba613f35d/volumes" Feb 25 12:48:36 crc kubenswrapper[5005]: I0225 12:48:36.673819 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:36 crc kubenswrapper[5005]: I0225 12:48:36.720860 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:36 crc kubenswrapper[5005]: I0225 12:48:36.912653 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kzkr"] Feb 25 12:48:37 crc kubenswrapper[5005]: I0225 12:48:37.873136 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8kzkr" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="registry-server" containerID="cri-o://ba5bc5817195ad8b4ae8469346fc4c0097369590559ecdb6eb86acf31c5de011" gracePeriod=2 Feb 25 12:48:38 crc kubenswrapper[5005]: I0225 12:48:38.885401 5005 generic.go:334] "Generic (PLEG): container finished" podID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerID="ba5bc5817195ad8b4ae8469346fc4c0097369590559ecdb6eb86acf31c5de011" exitCode=0 Feb 25 12:48:38 crc kubenswrapper[5005]: I0225 12:48:38.885467 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerDied","Data":"ba5bc5817195ad8b4ae8469346fc4c0097369590559ecdb6eb86acf31c5de011"} Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.180050 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.315962 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wpn5\" (UniqueName: \"kubernetes.io/projected/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-kube-api-access-7wpn5\") pod \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.316012 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-catalog-content\") pod \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.316056 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-utilities\") pod \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\" (UID: \"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed\") " Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.317028 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-utilities" (OuterVolumeSpecName: "utilities") pod "2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" (UID: "2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.322681 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-kube-api-access-7wpn5" (OuterVolumeSpecName: "kube-api-access-7wpn5") pod "2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" (UID: "2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed"). InnerVolumeSpecName "kube-api-access-7wpn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.408320 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" (UID: "2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.418008 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wpn5\" (UniqueName: \"kubernetes.io/projected/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-kube-api-access-7wpn5\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.418040 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.418050 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.920836 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kzkr" event={"ID":"2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed","Type":"ContainerDied","Data":"d4cdfe43cf30b4497d788e6a5ab10676cc0207c3d0c7b8a8648818ee15048df9"} Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.921162 5005 scope.go:117] "RemoveContainer" containerID="ba5bc5817195ad8b4ae8469346fc4c0097369590559ecdb6eb86acf31c5de011" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.920917 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kzkr" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.970728 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kzkr"] Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.970842 5005 scope.go:117] "RemoveContainer" containerID="2ae3b916da14e00a4bdfa5855cca83a27c2746b977c495cd0cd33c08eb519c97" Feb 25 12:48:39 crc kubenswrapper[5005]: I0225 12:48:39.987461 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8kzkr"] Feb 25 12:48:40 crc kubenswrapper[5005]: I0225 12:48:40.014975 5005 scope.go:117] "RemoveContainer" containerID="89af8567bcd5c8f6775d9843bd5fc1f1bb76d9de227c84fa2eaf7cda57df9de8" Feb 25 12:48:40 crc kubenswrapper[5005]: I0225 12:48:40.696046 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" path="/var/lib/kubelet/pods/2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed/volumes" Feb 25 12:48:58 crc kubenswrapper[5005]: I0225 12:48:58.087701 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:48:58 crc kubenswrapper[5005]: I0225 12:48:58.088272 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.114456 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.115074 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.115124 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.116179 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"afc7fe4cfa5ffc5c5f3d76fa24f478e0e6978fea9ed24e2c2fab6f085ed5dd41"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.116329 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://afc7fe4cfa5ffc5c5f3d76fa24f478e0e6978fea9ed24e2c2fab6f085ed5dd41" gracePeriod=600 Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.343026 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="afc7fe4cfa5ffc5c5f3d76fa24f478e0e6978fea9ed24e2c2fab6f085ed5dd41" exitCode=0 Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.343067 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"afc7fe4cfa5ffc5c5f3d76fa24f478e0e6978fea9ed24e2c2fab6f085ed5dd41"} Feb 25 12:49:28 crc kubenswrapper[5005]: I0225 12:49:28.343098 5005 scope.go:117] "RemoveContainer" containerID="d5323b9fb38e9e5b3654a15d617298ef1a2c7a35b633e8c3fa93501ca4da70f7" Feb 25 12:49:29 crc kubenswrapper[5005]: I0225 12:49:29.353779 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2"} Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.149278 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533730-wdgxw"] Feb 25 12:50:00 crc kubenswrapper[5005]: E0225 12:50:00.150195 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="extract-utilities" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150207 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="extract-utilities" Feb 25 12:50:00 crc kubenswrapper[5005]: E0225 12:50:00.150248 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="extract-utilities" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150255 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="extract-utilities" Feb 25 12:50:00 crc kubenswrapper[5005]: E0225 12:50:00.150269 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="extract-content" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150275 5005 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="extract-content" Feb 25 12:50:00 crc kubenswrapper[5005]: E0225 12:50:00.150284 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="registry-server" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150291 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="registry-server" Feb 25 12:50:00 crc kubenswrapper[5005]: E0225 12:50:00.150323 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="extract-content" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150328 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="extract-content" Feb 25 12:50:00 crc kubenswrapper[5005]: E0225 12:50:00.150339 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="registry-server" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150345 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="registry-server" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150570 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="df705b09-876e-4d70-93d2-8b0ba613f35d" containerName="registry-server" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.150584 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfbf8e7-a0f1-4be1-a5db-ddcda6b801ed" containerName="registry-server" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.151195 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.154078 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.154632 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.154808 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.162024 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9q72\" (UniqueName: \"kubernetes.io/projected/9571a5f1-50e5-4990-a329-aac058e610a9-kube-api-access-p9q72\") pod \"auto-csr-approver-29533730-wdgxw\" (UID: \"9571a5f1-50e5-4990-a329-aac058e610a9\") " pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.165089 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533730-wdgxw"] Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.263441 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9q72\" (UniqueName: \"kubernetes.io/projected/9571a5f1-50e5-4990-a329-aac058e610a9-kube-api-access-p9q72\") pod \"auto-csr-approver-29533730-wdgxw\" (UID: \"9571a5f1-50e5-4990-a329-aac058e610a9\") " pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.558828 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9q72\" (UniqueName: \"kubernetes.io/projected/9571a5f1-50e5-4990-a329-aac058e610a9-kube-api-access-p9q72\") pod \"auto-csr-approver-29533730-wdgxw\" (UID: \"9571a5f1-50e5-4990-a329-aac058e610a9\") " 
pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:00 crc kubenswrapper[5005]: I0225 12:50:00.771707 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:01 crc kubenswrapper[5005]: I0225 12:50:01.264936 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533730-wdgxw"] Feb 25 12:50:01 crc kubenswrapper[5005]: I0225 12:50:01.647146 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" event={"ID":"9571a5f1-50e5-4990-a329-aac058e610a9","Type":"ContainerStarted","Data":"6516ed43196c2a266bac9c293cc6589fa8987ad9d8feee0cf1ccde043fe18b90"} Feb 25 12:50:03 crc kubenswrapper[5005]: E0225 12:50:03.432926 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9571a5f1_50e5_4990_a329_aac058e610a9.slice/crio-e6297a18bf623213401e4ab570e53539ec95962a60a2df0362b104687919f1e9.scope\": RecentStats: unable to find data in memory cache]" Feb 25 12:50:03 crc kubenswrapper[5005]: I0225 12:50:03.664668 5005 generic.go:334] "Generic (PLEG): container finished" podID="9571a5f1-50e5-4990-a329-aac058e610a9" containerID="e6297a18bf623213401e4ab570e53539ec95962a60a2df0362b104687919f1e9" exitCode=0 Feb 25 12:50:03 crc kubenswrapper[5005]: I0225 12:50:03.664752 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" event={"ID":"9571a5f1-50e5-4990-a329-aac058e610a9","Type":"ContainerDied","Data":"e6297a18bf623213401e4ab570e53539ec95962a60a2df0362b104687919f1e9"} Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.181928 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.360479 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9q72\" (UniqueName: \"kubernetes.io/projected/9571a5f1-50e5-4990-a329-aac058e610a9-kube-api-access-p9q72\") pod \"9571a5f1-50e5-4990-a329-aac058e610a9\" (UID: \"9571a5f1-50e5-4990-a329-aac058e610a9\") " Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.373246 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9571a5f1-50e5-4990-a329-aac058e610a9-kube-api-access-p9q72" (OuterVolumeSpecName: "kube-api-access-p9q72") pod "9571a5f1-50e5-4990-a329-aac058e610a9" (UID: "9571a5f1-50e5-4990-a329-aac058e610a9"). InnerVolumeSpecName "kube-api-access-p9q72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.462983 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9q72\" (UniqueName: \"kubernetes.io/projected/9571a5f1-50e5-4990-a329-aac058e610a9-kube-api-access-p9q72\") on node \"crc\" DevicePath \"\"" Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.686645 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" event={"ID":"9571a5f1-50e5-4990-a329-aac058e610a9","Type":"ContainerDied","Data":"6516ed43196c2a266bac9c293cc6589fa8987ad9d8feee0cf1ccde043fe18b90"} Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.686917 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6516ed43196c2a266bac9c293cc6589fa8987ad9d8feee0cf1ccde043fe18b90" Feb 25 12:50:05 crc kubenswrapper[5005]: I0225 12:50:05.686970 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533730-wdgxw" Feb 25 12:50:06 crc kubenswrapper[5005]: I0225 12:50:06.286414 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533724-jtvnw"] Feb 25 12:50:06 crc kubenswrapper[5005]: I0225 12:50:06.298183 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533724-jtvnw"] Feb 25 12:50:06 crc kubenswrapper[5005]: I0225 12:50:06.703779 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72227b2a-5882-4611-8534-7d06c7f26649" path="/var/lib/kubelet/pods/72227b2a-5882-4611-8534-7d06c7f26649/volumes" Feb 25 12:50:22 crc kubenswrapper[5005]: I0225 12:50:22.854221 5005 scope.go:117] "RemoveContainer" containerID="513606bc5b1ca31b14de7bdf7cd618c77e1affee5f3a0e9bdb31ddcd8f416c32" Feb 25 12:51:28 crc kubenswrapper[5005]: I0225 12:51:28.087425 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:51:28 crc kubenswrapper[5005]: I0225 12:51:28.088118 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:51:58 crc kubenswrapper[5005]: I0225 12:51:58.087414 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:51:58 crc kubenswrapper[5005]: 
I0225 12:51:58.088144 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.141198 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533732-g68z8"] Feb 25 12:52:00 crc kubenswrapper[5005]: E0225 12:52:00.141874 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9571a5f1-50e5-4990-a329-aac058e610a9" containerName="oc" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.141889 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9571a5f1-50e5-4990-a329-aac058e610a9" containerName="oc" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.142094 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9571a5f1-50e5-4990-a329-aac058e610a9" containerName="oc" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.142789 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.145874 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.146420 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.146458 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.161335 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533732-g68z8"] Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.223283 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865fw\" (UniqueName: \"kubernetes.io/projected/4d833769-8a65-4715-a8e8-a578d2f993f2-kube-api-access-865fw\") pod \"auto-csr-approver-29533732-g68z8\" (UID: \"4d833769-8a65-4715-a8e8-a578d2f993f2\") " pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.325809 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-865fw\" (UniqueName: \"kubernetes.io/projected/4d833769-8a65-4715-a8e8-a578d2f993f2-kube-api-access-865fw\") pod \"auto-csr-approver-29533732-g68z8\" (UID: \"4d833769-8a65-4715-a8e8-a578d2f993f2\") " pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.350975 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-865fw\" (UniqueName: \"kubernetes.io/projected/4d833769-8a65-4715-a8e8-a578d2f993f2-kube-api-access-865fw\") pod \"auto-csr-approver-29533732-g68z8\" (UID: \"4d833769-8a65-4715-a8e8-a578d2f993f2\") " 
pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.464833 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:00 crc kubenswrapper[5005]: I0225 12:52:00.928881 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533732-g68z8"] Feb 25 12:52:01 crc kubenswrapper[5005]: I0225 12:52:01.823815 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533732-g68z8" event={"ID":"4d833769-8a65-4715-a8e8-a578d2f993f2","Type":"ContainerStarted","Data":"b534ad02dcb18326637866ba416352f3f4db7e01e2ea918d96e175dbbc50eb38"} Feb 25 12:52:02 crc kubenswrapper[5005]: I0225 12:52:02.834830 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533732-g68z8" event={"ID":"4d833769-8a65-4715-a8e8-a578d2f993f2","Type":"ContainerStarted","Data":"adeb0084a4510d369340a666dbfdcf1aa79111a8b99c7a9f63432b9c3a8e1970"} Feb 25 12:52:02 crc kubenswrapper[5005]: I0225 12:52:02.850035 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533732-g68z8" podStartSLOduration=1.787513411 podStartE2EDuration="2.850013288s" podCreationTimestamp="2026-02-25 12:52:00 +0000 UTC" firstStartedPulling="2026-02-25 12:52:01.07698939 +0000 UTC m=+5635.117721717" lastFinishedPulling="2026-02-25 12:52:02.139489267 +0000 UTC m=+5636.180221594" observedRunningTime="2026-02-25 12:52:02.846959944 +0000 UTC m=+5636.887692281" watchObservedRunningTime="2026-02-25 12:52:02.850013288 +0000 UTC m=+5636.890745635" Feb 25 12:52:03 crc kubenswrapper[5005]: I0225 12:52:03.846214 5005 generic.go:334] "Generic (PLEG): container finished" podID="4d833769-8a65-4715-a8e8-a578d2f993f2" containerID="adeb0084a4510d369340a666dbfdcf1aa79111a8b99c7a9f63432b9c3a8e1970" exitCode=0 Feb 25 12:52:03 crc 
kubenswrapper[5005]: I0225 12:52:03.846275 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533732-g68z8" event={"ID":"4d833769-8a65-4715-a8e8-a578d2f993f2","Type":"ContainerDied","Data":"adeb0084a4510d369340a666dbfdcf1aa79111a8b99c7a9f63432b9c3a8e1970"} Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.407683 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.531914 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-865fw\" (UniqueName: \"kubernetes.io/projected/4d833769-8a65-4715-a8e8-a578d2f993f2-kube-api-access-865fw\") pod \"4d833769-8a65-4715-a8e8-a578d2f993f2\" (UID: \"4d833769-8a65-4715-a8e8-a578d2f993f2\") " Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.544016 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d833769-8a65-4715-a8e8-a578d2f993f2-kube-api-access-865fw" (OuterVolumeSpecName: "kube-api-access-865fw") pod "4d833769-8a65-4715-a8e8-a578d2f993f2" (UID: "4d833769-8a65-4715-a8e8-a578d2f993f2"). InnerVolumeSpecName "kube-api-access-865fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.634264 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-865fw\" (UniqueName: \"kubernetes.io/projected/4d833769-8a65-4715-a8e8-a578d2f993f2-kube-api-access-865fw\") on node \"crc\" DevicePath \"\"" Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.868124 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533732-g68z8" event={"ID":"4d833769-8a65-4715-a8e8-a578d2f993f2","Type":"ContainerDied","Data":"b534ad02dcb18326637866ba416352f3f4db7e01e2ea918d96e175dbbc50eb38"} Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.868158 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b534ad02dcb18326637866ba416352f3f4db7e01e2ea918d96e175dbbc50eb38" Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.868228 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533732-g68z8" Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.949409 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533726-p95lm"] Feb 25 12:52:05 crc kubenswrapper[5005]: I0225 12:52:05.956893 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533726-p95lm"] Feb 25 12:52:06 crc kubenswrapper[5005]: I0225 12:52:06.705140 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf36d31-44e4-460a-85b5-168cdfdb31a0" path="/var/lib/kubelet/pods/acf36d31-44e4-460a-85b5-168cdfdb31a0/volumes" Feb 25 12:52:22 crc kubenswrapper[5005]: I0225 12:52:22.984852 5005 scope.go:117] "RemoveContainer" containerID="930abb6f9f2aa655e8b5f4748b047ffd7caf64b53241d890e7b79b949f58b9ab" Feb 25 12:52:28 crc kubenswrapper[5005]: I0225 12:52:28.087242 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:52:28 crc kubenswrapper[5005]: I0225 12:52:28.088005 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:52:28 crc kubenswrapper[5005]: I0225 12:52:28.088083 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 12:52:28 crc kubenswrapper[5005]: I0225 12:52:28.089484 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 12:52:28 crc kubenswrapper[5005]: I0225 12:52:28.089602 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" gracePeriod=600 Feb 25 12:52:28 crc kubenswrapper[5005]: E0225 12:52:28.232575 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:52:29 crc kubenswrapper[5005]: I0225 12:52:29.070996 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" exitCode=0 Feb 25 12:52:29 crc kubenswrapper[5005]: I0225 12:52:29.071062 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2"} Feb 25 12:52:29 crc kubenswrapper[5005]: I0225 12:52:29.071391 5005 scope.go:117] "RemoveContainer" containerID="afc7fe4cfa5ffc5c5f3d76fa24f478e0e6978fea9ed24e2c2fab6f085ed5dd41" Feb 25 12:52:29 crc kubenswrapper[5005]: I0225 12:52:29.072006 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:52:29 crc kubenswrapper[5005]: E0225 12:52:29.072234 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:52:42 crc kubenswrapper[5005]: I0225 12:52:42.686260 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:52:42 crc kubenswrapper[5005]: E0225 12:52:42.687504 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:52:54 crc kubenswrapper[5005]: I0225 12:52:54.686299 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:52:54 crc kubenswrapper[5005]: E0225 12:52:54.687138 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:53:09 crc kubenswrapper[5005]: I0225 12:53:09.688605 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:53:09 crc kubenswrapper[5005]: E0225 12:53:09.689580 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:53:22 crc kubenswrapper[5005]: I0225 12:53:22.686511 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:53:22 crc kubenswrapper[5005]: E0225 12:53:22.687785 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:53:37 crc kubenswrapper[5005]: I0225 12:53:37.684954 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:53:37 crc kubenswrapper[5005]: E0225 12:53:37.686466 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:53:50 crc kubenswrapper[5005]: I0225 12:53:50.686146 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:53:50 crc kubenswrapper[5005]: E0225 12:53:50.687136 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.148477 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533734-h7ck2"] Feb 25 12:54:00 crc kubenswrapper[5005]: E0225 12:54:00.149656 5005 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4d833769-8a65-4715-a8e8-a578d2f993f2" containerName="oc" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.149682 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d833769-8a65-4715-a8e8-a578d2f993f2" containerName="oc" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.149996 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d833769-8a65-4715-a8e8-a578d2f993f2" containerName="oc" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.150796 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.153323 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.153357 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.153425 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.164126 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533734-h7ck2"] Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.343920 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsdq8\" (UniqueName: \"kubernetes.io/projected/8a43239f-6794-4566-999b-c8ecae920f9b-kube-api-access-dsdq8\") pod \"auto-csr-approver-29533734-h7ck2\" (UID: \"8a43239f-6794-4566-999b-c8ecae920f9b\") " pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.446632 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsdq8\" (UniqueName: 
\"kubernetes.io/projected/8a43239f-6794-4566-999b-c8ecae920f9b-kube-api-access-dsdq8\") pod \"auto-csr-approver-29533734-h7ck2\" (UID: \"8a43239f-6794-4566-999b-c8ecae920f9b\") " pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.467994 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsdq8\" (UniqueName: \"kubernetes.io/projected/8a43239f-6794-4566-999b-c8ecae920f9b-kube-api-access-dsdq8\") pod \"auto-csr-approver-29533734-h7ck2\" (UID: \"8a43239f-6794-4566-999b-c8ecae920f9b\") " pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.512644 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.981230 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533734-h7ck2"] Feb 25 12:54:00 crc kubenswrapper[5005]: I0225 12:54:00.984467 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:54:01 crc kubenswrapper[5005]: I0225 12:54:01.771041 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:54:01 crc kubenswrapper[5005]: E0225 12:54:01.771513 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:54:01 crc kubenswrapper[5005]: I0225 12:54:01.959107 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29533734-h7ck2" event={"ID":"8a43239f-6794-4566-999b-c8ecae920f9b","Type":"ContainerStarted","Data":"682ca0bdb815ed8e2c8c4cafeddf46c3709e28250a573f4e6b8caa203d79fe4e"} Feb 25 12:54:04 crc kubenswrapper[5005]: I0225 12:54:04.143777 5005 generic.go:334] "Generic (PLEG): container finished" podID="8a43239f-6794-4566-999b-c8ecae920f9b" containerID="e5278458fe4050606aa287d4525d23d0f6b96fbdc7a3b6b4bf2403d70246af88" exitCode=0 Feb 25 12:54:04 crc kubenswrapper[5005]: I0225 12:54:04.143837 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533734-h7ck2" event={"ID":"8a43239f-6794-4566-999b-c8ecae920f9b","Type":"ContainerDied","Data":"e5278458fe4050606aa287d4525d23d0f6b96fbdc7a3b6b4bf2403d70246af88"} Feb 25 12:54:05 crc kubenswrapper[5005]: I0225 12:54:05.629564 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:05 crc kubenswrapper[5005]: I0225 12:54:05.675551 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsdq8\" (UniqueName: \"kubernetes.io/projected/8a43239f-6794-4566-999b-c8ecae920f9b-kube-api-access-dsdq8\") pod \"8a43239f-6794-4566-999b-c8ecae920f9b\" (UID: \"8a43239f-6794-4566-999b-c8ecae920f9b\") " Feb 25 12:54:05 crc kubenswrapper[5005]: I0225 12:54:05.681392 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a43239f-6794-4566-999b-c8ecae920f9b-kube-api-access-dsdq8" (OuterVolumeSpecName: "kube-api-access-dsdq8") pod "8a43239f-6794-4566-999b-c8ecae920f9b" (UID: "8a43239f-6794-4566-999b-c8ecae920f9b"). InnerVolumeSpecName "kube-api-access-dsdq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:54:05 crc kubenswrapper[5005]: I0225 12:54:05.777582 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsdq8\" (UniqueName: \"kubernetes.io/projected/8a43239f-6794-4566-999b-c8ecae920f9b-kube-api-access-dsdq8\") on node \"crc\" DevicePath \"\"" Feb 25 12:54:06 crc kubenswrapper[5005]: I0225 12:54:06.167180 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533734-h7ck2" event={"ID":"8a43239f-6794-4566-999b-c8ecae920f9b","Type":"ContainerDied","Data":"682ca0bdb815ed8e2c8c4cafeddf46c3709e28250a573f4e6b8caa203d79fe4e"} Feb 25 12:54:06 crc kubenswrapper[5005]: I0225 12:54:06.167250 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="682ca0bdb815ed8e2c8c4cafeddf46c3709e28250a573f4e6b8caa203d79fe4e" Feb 25 12:54:06 crc kubenswrapper[5005]: I0225 12:54:06.167253 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533734-h7ck2" Feb 25 12:54:06 crc kubenswrapper[5005]: I0225 12:54:06.701269 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533728-65q7n"] Feb 25 12:54:06 crc kubenswrapper[5005]: I0225 12:54:06.709827 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533728-65q7n"] Feb 25 12:54:08 crc kubenswrapper[5005]: I0225 12:54:08.696609 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79" path="/var/lib/kubelet/pods/49b8fe63-d4f0-43e0-ab11-e7cd3fdc2e79/volumes" Feb 25 12:54:15 crc kubenswrapper[5005]: I0225 12:54:15.686230 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:54:15 crc kubenswrapper[5005]: E0225 12:54:15.687538 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:54:23 crc kubenswrapper[5005]: I0225 12:54:23.104838 5005 scope.go:117] "RemoveContainer" containerID="d5aae84e05ed2e95952589cb3d90cab7b21ac26b2dff15ee43ac52146e9866aa" Feb 25 12:54:30 crc kubenswrapper[5005]: I0225 12:54:30.686410 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:54:30 crc kubenswrapper[5005]: E0225 12:54:30.687141 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:54:44 crc kubenswrapper[5005]: I0225 12:54:44.689712 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:54:44 crc kubenswrapper[5005]: E0225 12:54:44.690441 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.009013 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-z7mx6"] Feb 25 12:54:56 crc kubenswrapper[5005]: E0225 12:54:56.010730 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a43239f-6794-4566-999b-c8ecae920f9b" containerName="oc" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.010769 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a43239f-6794-4566-999b-c8ecae920f9b" containerName="oc" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.011240 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a43239f-6794-4566-999b-c8ecae920f9b" containerName="oc" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.013682 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.032073 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7mx6"] Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.086765 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-catalog-content\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.087065 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqbzs\" (UniqueName: \"kubernetes.io/projected/bf9e297f-ad3c-4da2-a960-bc1c9973703b-kube-api-access-jqbzs\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.087311 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-utilities\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.189693 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-catalog-content\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.189816 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqbzs\" (UniqueName: \"kubernetes.io/projected/bf9e297f-ad3c-4da2-a960-bc1c9973703b-kube-api-access-jqbzs\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.189896 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-utilities\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.190311 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-catalog-content\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.190361 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-utilities\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.208730 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqbzs\" (UniqueName: \"kubernetes.io/projected/bf9e297f-ad3c-4da2-a960-bc1c9973703b-kube-api-access-jqbzs\") pod \"redhat-operators-z7mx6\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.337043 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:54:56 crc kubenswrapper[5005]: I0225 12:54:56.836193 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7mx6"] Feb 25 12:54:57 crc kubenswrapper[5005]: I0225 12:54:57.650209 5005 generic.go:334] "Generic (PLEG): container finished" podID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerID="93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6" exitCode=0 Feb 25 12:54:57 crc kubenswrapper[5005]: I0225 12:54:57.650322 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerDied","Data":"93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6"} Feb 25 12:54:57 crc kubenswrapper[5005]: I0225 12:54:57.650513 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerStarted","Data":"92c5f2a31e6927a1327458a960f633a80f10fec7d7ddea8e07d9db317fd8ff40"} Feb 25 12:54:59 crc kubenswrapper[5005]: I0225 12:54:59.669836 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerStarted","Data":"b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24"} Feb 25 12:54:59 crc kubenswrapper[5005]: I0225 12:54:59.687023 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:54:59 crc kubenswrapper[5005]: E0225 12:54:59.687705 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:55:03 crc kubenswrapper[5005]: I0225 12:55:03.706938 5005 generic.go:334] "Generic (PLEG): container finished" podID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerID="b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24" exitCode=0 Feb 25 12:55:03 crc kubenswrapper[5005]: I0225 12:55:03.707060 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerDied","Data":"b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24"} Feb 25 12:55:04 crc kubenswrapper[5005]: I0225 12:55:04.718452 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerStarted","Data":"c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19"} Feb 25 12:55:04 crc kubenswrapper[5005]: I0225 12:55:04.744261 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z7mx6" podStartSLOduration=3.152088539 
podStartE2EDuration="9.7442408s" podCreationTimestamp="2026-02-25 12:54:55 +0000 UTC" firstStartedPulling="2026-02-25 12:54:57.651757216 +0000 UTC m=+5811.692489543" lastFinishedPulling="2026-02-25 12:55:04.243909437 +0000 UTC m=+5818.284641804" observedRunningTime="2026-02-25 12:55:04.743532059 +0000 UTC m=+5818.784264416" watchObservedRunningTime="2026-02-25 12:55:04.7442408 +0000 UTC m=+5818.784973137" Feb 25 12:55:06 crc kubenswrapper[5005]: I0225 12:55:06.338102 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:55:06 crc kubenswrapper[5005]: I0225 12:55:06.338162 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:55:07 crc kubenswrapper[5005]: I0225 12:55:07.423353 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z7mx6" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="registry-server" probeResult="failure" output=< Feb 25 12:55:07 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 12:55:07 crc kubenswrapper[5005]: > Feb 25 12:55:10 crc kubenswrapper[5005]: I0225 12:55:10.685881 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:55:10 crc kubenswrapper[5005]: E0225 12:55:10.686804 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:55:16 crc kubenswrapper[5005]: I0225 12:55:16.390453 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:55:16 crc kubenswrapper[5005]: I0225 12:55:16.447576 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:55:16 crc kubenswrapper[5005]: I0225 12:55:16.624787 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7mx6"] Feb 25 12:55:17 crc kubenswrapper[5005]: I0225 12:55:17.824327 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z7mx6" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="registry-server" containerID="cri-o://c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19" gracePeriod=2 Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.412821 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.447846 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-utilities\") pod \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.447920 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-catalog-content\") pod \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.448143 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqbzs\" (UniqueName: \"kubernetes.io/projected/bf9e297f-ad3c-4da2-a960-bc1c9973703b-kube-api-access-jqbzs\") pod 
\"bf9e297f-ad3c-4da2-a960-bc1c9973703b\" (UID: \"bf9e297f-ad3c-4da2-a960-bc1c9973703b\") " Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.448595 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-utilities" (OuterVolumeSpecName: "utilities") pod "bf9e297f-ad3c-4da2-a960-bc1c9973703b" (UID: "bf9e297f-ad3c-4da2-a960-bc1c9973703b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.448994 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.496986 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9e297f-ad3c-4da2-a960-bc1c9973703b-kube-api-access-jqbzs" (OuterVolumeSpecName: "kube-api-access-jqbzs") pod "bf9e297f-ad3c-4da2-a960-bc1c9973703b" (UID: "bf9e297f-ad3c-4da2-a960-bc1c9973703b"). InnerVolumeSpecName "kube-api-access-jqbzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.556917 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqbzs\" (UniqueName: \"kubernetes.io/projected/bf9e297f-ad3c-4da2-a960-bc1c9973703b-kube-api-access-jqbzs\") on node \"crc\" DevicePath \"\"" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.608216 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf9e297f-ad3c-4da2-a960-bc1c9973703b" (UID: "bf9e297f-ad3c-4da2-a960-bc1c9973703b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.659304 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf9e297f-ad3c-4da2-a960-bc1c9973703b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.835355 5005 generic.go:334] "Generic (PLEG): container finished" podID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerID="c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19" exitCode=0 Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.835426 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerDied","Data":"c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19"} Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.835491 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7mx6" event={"ID":"bf9e297f-ad3c-4da2-a960-bc1c9973703b","Type":"ContainerDied","Data":"92c5f2a31e6927a1327458a960f633a80f10fec7d7ddea8e07d9db317fd8ff40"} Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.835521 5005 scope.go:117] "RemoveContainer" containerID="c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.835417 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7mx6" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.857631 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7mx6"] Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.867336 5005 scope.go:117] "RemoveContainer" containerID="b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.869060 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z7mx6"] Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.892824 5005 scope.go:117] "RemoveContainer" containerID="93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.932560 5005 scope.go:117] "RemoveContainer" containerID="c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19" Feb 25 12:55:18 crc kubenswrapper[5005]: E0225 12:55:18.933149 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19\": container with ID starting with c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19 not found: ID does not exist" containerID="c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.933183 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19"} err="failed to get container status \"c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19\": rpc error: code = NotFound desc = could not find container \"c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19\": container with ID starting with c650154f598207c320e0a144118df5a11fee14261668d28778aff033a971df19 not found: ID does 
not exist" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.933220 5005 scope.go:117] "RemoveContainer" containerID="b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24" Feb 25 12:55:18 crc kubenswrapper[5005]: E0225 12:55:18.933709 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24\": container with ID starting with b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24 not found: ID does not exist" containerID="b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.933744 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24"} err="failed to get container status \"b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24\": rpc error: code = NotFound desc = could not find container \"b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24\": container with ID starting with b40b8f0f82feeedcc85940302f561ca2a79e0854c49e44b5fc93c9ffc56f0f24 not found: ID does not exist" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.933764 5005 scope.go:117] "RemoveContainer" containerID="93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6" Feb 25 12:55:18 crc kubenswrapper[5005]: E0225 12:55:18.934249 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6\": container with ID starting with 93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6 not found: ID does not exist" containerID="93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6" Feb 25 12:55:18 crc kubenswrapper[5005]: I0225 12:55:18.934294 5005 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6"} err="failed to get container status \"93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6\": rpc error: code = NotFound desc = could not find container \"93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6\": container with ID starting with 93d217dfa6791f73bd5e08fa05363bc3d72205d02a7e7c3aa634e1f13b7607b6 not found: ID does not exist" Feb 25 12:55:20 crc kubenswrapper[5005]: I0225 12:55:20.705167 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" path="/var/lib/kubelet/pods/bf9e297f-ad3c-4da2-a960-bc1c9973703b/volumes" Feb 25 12:55:25 crc kubenswrapper[5005]: I0225 12:55:25.685702 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:55:25 crc kubenswrapper[5005]: E0225 12:55:25.686554 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:55:37 crc kubenswrapper[5005]: I0225 12:55:37.686341 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:55:37 crc kubenswrapper[5005]: E0225 12:55:37.687033 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:55:52 crc kubenswrapper[5005]: I0225 12:55:52.702441 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:55:52 crc kubenswrapper[5005]: E0225 12:55:52.703900 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.158570 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533736-qmkqb"] Feb 25 12:56:00 crc kubenswrapper[5005]: E0225 12:56:00.159440 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="registry-server" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.159452 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="registry-server" Feb 25 12:56:00 crc kubenswrapper[5005]: E0225 12:56:00.159464 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="extract-content" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.159470 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="extract-content" Feb 25 12:56:00 crc kubenswrapper[5005]: E0225 12:56:00.159490 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="extract-utilities" Feb 25 12:56:00 crc kubenswrapper[5005]: 
I0225 12:56:00.159497 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="extract-utilities" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.159694 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9e297f-ad3c-4da2-a960-bc1c9973703b" containerName="registry-server" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.160251 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.162452 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.162880 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.163471 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.171171 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533736-qmkqb"] Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.259013 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bsj\" (UniqueName: \"kubernetes.io/projected/bd0b9ea2-eb17-41fe-9a64-1683557efe11-kube-api-access-c5bsj\") pod \"auto-csr-approver-29533736-qmkqb\" (UID: \"bd0b9ea2-eb17-41fe-9a64-1683557efe11\") " pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.361159 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bsj\" (UniqueName: \"kubernetes.io/projected/bd0b9ea2-eb17-41fe-9a64-1683557efe11-kube-api-access-c5bsj\") pod 
\"auto-csr-approver-29533736-qmkqb\" (UID: \"bd0b9ea2-eb17-41fe-9a64-1683557efe11\") " pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.379164 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bsj\" (UniqueName: \"kubernetes.io/projected/bd0b9ea2-eb17-41fe-9a64-1683557efe11-kube-api-access-c5bsj\") pod \"auto-csr-approver-29533736-qmkqb\" (UID: \"bd0b9ea2-eb17-41fe-9a64-1683557efe11\") " pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.483598 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:00 crc kubenswrapper[5005]: I0225 12:56:00.957067 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533736-qmkqb"] Feb 25 12:56:01 crc kubenswrapper[5005]: I0225 12:56:01.272256 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" event={"ID":"bd0b9ea2-eb17-41fe-9a64-1683557efe11","Type":"ContainerStarted","Data":"75438e19e9082c5fbe50997e1eae3931713a8729527acc5c3aedeb45de4207c9"} Feb 25 12:56:03 crc kubenswrapper[5005]: I0225 12:56:03.295618 5005 generic.go:334] "Generic (PLEG): container finished" podID="bd0b9ea2-eb17-41fe-9a64-1683557efe11" containerID="3043e67c9f1cab4312ce236d718b37b4c7606fbe4b78d9e2e68b7460e41f6899" exitCode=0 Feb 25 12:56:03 crc kubenswrapper[5005]: I0225 12:56:03.295719 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" event={"ID":"bd0b9ea2-eb17-41fe-9a64-1683557efe11","Type":"ContainerDied","Data":"3043e67c9f1cab4312ce236d718b37b4c7606fbe4b78d9e2e68b7460e41f6899"} Feb 25 12:56:04 crc kubenswrapper[5005]: I0225 12:56:04.763612 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:04 crc kubenswrapper[5005]: I0225 12:56:04.867185 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bsj\" (UniqueName: \"kubernetes.io/projected/bd0b9ea2-eb17-41fe-9a64-1683557efe11-kube-api-access-c5bsj\") pod \"bd0b9ea2-eb17-41fe-9a64-1683557efe11\" (UID: \"bd0b9ea2-eb17-41fe-9a64-1683557efe11\") " Feb 25 12:56:04 crc kubenswrapper[5005]: I0225 12:56:04.873265 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0b9ea2-eb17-41fe-9a64-1683557efe11-kube-api-access-c5bsj" (OuterVolumeSpecName: "kube-api-access-c5bsj") pod "bd0b9ea2-eb17-41fe-9a64-1683557efe11" (UID: "bd0b9ea2-eb17-41fe-9a64-1683557efe11"). InnerVolumeSpecName "kube-api-access-c5bsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:56:04 crc kubenswrapper[5005]: I0225 12:56:04.970303 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bsj\" (UniqueName: \"kubernetes.io/projected/bd0b9ea2-eb17-41fe-9a64-1683557efe11-kube-api-access-c5bsj\") on node \"crc\" DevicePath \"\"" Feb 25 12:56:05 crc kubenswrapper[5005]: I0225 12:56:05.312294 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" event={"ID":"bd0b9ea2-eb17-41fe-9a64-1683557efe11","Type":"ContainerDied","Data":"75438e19e9082c5fbe50997e1eae3931713a8729527acc5c3aedeb45de4207c9"} Feb 25 12:56:05 crc kubenswrapper[5005]: I0225 12:56:05.312333 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75438e19e9082c5fbe50997e1eae3931713a8729527acc5c3aedeb45de4207c9" Feb 25 12:56:05 crc kubenswrapper[5005]: I0225 12:56:05.312345 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533736-qmkqb" Feb 25 12:56:05 crc kubenswrapper[5005]: I0225 12:56:05.838948 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533730-wdgxw"] Feb 25 12:56:05 crc kubenswrapper[5005]: I0225 12:56:05.849413 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533730-wdgxw"] Feb 25 12:56:06 crc kubenswrapper[5005]: I0225 12:56:06.702681 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9571a5f1-50e5-4990-a329-aac058e610a9" path="/var/lib/kubelet/pods/9571a5f1-50e5-4990-a329-aac058e610a9/volumes" Feb 25 12:56:07 crc kubenswrapper[5005]: I0225 12:56:07.685136 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:56:07 crc kubenswrapper[5005]: E0225 12:56:07.685577 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:56:18 crc kubenswrapper[5005]: I0225 12:56:18.685534 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:56:18 crc kubenswrapper[5005]: E0225 12:56:18.686304 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:56:23 crc kubenswrapper[5005]: I0225 12:56:23.219571 5005 scope.go:117] "RemoveContainer" containerID="e6297a18bf623213401e4ab570e53539ec95962a60a2df0362b104687919f1e9" Feb 25 12:56:29 crc kubenswrapper[5005]: I0225 12:56:29.686645 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:56:29 crc kubenswrapper[5005]: E0225 12:56:29.688257 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:56:40 crc kubenswrapper[5005]: I0225 12:56:40.686209 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:56:40 crc kubenswrapper[5005]: E0225 12:56:40.687147 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:56:51 crc kubenswrapper[5005]: I0225 12:56:51.686027 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:56:51 crc kubenswrapper[5005]: E0225 12:56:51.686811 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:57:06 crc kubenswrapper[5005]: I0225 12:57:06.702028 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:57:06 crc kubenswrapper[5005]: E0225 12:57:06.703846 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:57:17 crc kubenswrapper[5005]: I0225 12:57:17.686430 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:57:17 crc kubenswrapper[5005]: E0225 12:57:17.687209 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.298286 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4qpp8"] Feb 25 12:57:31 crc kubenswrapper[5005]: E0225 12:57:31.299194 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0b9ea2-eb17-41fe-9a64-1683557efe11" containerName="oc" Feb 25 12:57:31 crc kubenswrapper[5005]: 
I0225 12:57:31.299206 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0b9ea2-eb17-41fe-9a64-1683557efe11" containerName="oc" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.299379 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0b9ea2-eb17-41fe-9a64-1683557efe11" containerName="oc" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.300664 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.336403 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qpp8"] Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.420340 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6f6\" (UniqueName: \"kubernetes.io/projected/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-kube-api-access-7m6f6\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.420523 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-catalog-content\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.420561 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-utilities\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 
12:57:31.522736 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m6f6\" (UniqueName: \"kubernetes.io/projected/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-kube-api-access-7m6f6\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.522907 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-catalog-content\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.522933 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-utilities\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.523528 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-utilities\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.523702 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-catalog-content\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.546131 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m6f6\" (UniqueName: \"kubernetes.io/projected/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-kube-api-access-7m6f6\") pod \"community-operators-4qpp8\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:31 crc kubenswrapper[5005]: I0225 12:57:31.624401 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:32 crc kubenswrapper[5005]: I0225 12:57:32.138746 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qpp8"] Feb 25 12:57:32 crc kubenswrapper[5005]: I0225 12:57:32.685897 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 12:57:33 crc kubenswrapper[5005]: I0225 12:57:33.093600 5005 generic.go:334] "Generic (PLEG): container finished" podID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerID="c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0" exitCode=0 Feb 25 12:57:33 crc kubenswrapper[5005]: I0225 12:57:33.093854 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerDied","Data":"c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0"} Feb 25 12:57:33 crc kubenswrapper[5005]: I0225 12:57:33.093878 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerStarted","Data":"697a17e414c30fbbeee859ba857e5f9e7d9e570a12552a820beef5631e3d68ed"} Feb 25 12:57:33 crc kubenswrapper[5005]: I0225 12:57:33.099439 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"6c2ff527f1ec5ff1329d594b9ec21cfb96db8b4836612d2713336089cccc84e6"} Feb 25 12:57:34 crc kubenswrapper[5005]: I0225 12:57:34.110047 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerStarted","Data":"6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4"} Feb 25 12:57:35 crc kubenswrapper[5005]: I0225 12:57:35.122947 5005 generic.go:334] "Generic (PLEG): container finished" podID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerID="6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4" exitCode=0 Feb 25 12:57:35 crc kubenswrapper[5005]: I0225 12:57:35.123011 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerDied","Data":"6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4"} Feb 25 12:57:36 crc kubenswrapper[5005]: I0225 12:57:36.133994 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerStarted","Data":"d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5"} Feb 25 12:57:36 crc kubenswrapper[5005]: I0225 12:57:36.158289 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4qpp8" podStartSLOduration=2.672795578 podStartE2EDuration="5.158272178s" podCreationTimestamp="2026-02-25 12:57:31 +0000 UTC" firstStartedPulling="2026-02-25 12:57:33.095523229 +0000 UTC m=+5967.136255556" lastFinishedPulling="2026-02-25 12:57:35.580999819 +0000 UTC m=+5969.621732156" observedRunningTime="2026-02-25 12:57:36.153642544 +0000 UTC m=+5970.194374871" watchObservedRunningTime="2026-02-25 12:57:36.158272178 +0000 UTC 
m=+5970.199004505" Feb 25 12:57:41 crc kubenswrapper[5005]: I0225 12:57:41.625605 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:41 crc kubenswrapper[5005]: I0225 12:57:41.626233 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:41 crc kubenswrapper[5005]: I0225 12:57:41.719716 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:42 crc kubenswrapper[5005]: I0225 12:57:42.228668 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:42 crc kubenswrapper[5005]: I0225 12:57:42.272740 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4qpp8"] Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.203766 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4qpp8" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="registry-server" containerID="cri-o://d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5" gracePeriod=2 Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.788038 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.899971 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-catalog-content\") pod \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.900037 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m6f6\" (UniqueName: \"kubernetes.io/projected/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-kube-api-access-7m6f6\") pod \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.900100 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-utilities\") pod \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\" (UID: \"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f\") " Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.900921 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-utilities" (OuterVolumeSpecName: "utilities") pod "44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" (UID: "44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.911963 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-kube-api-access-7m6f6" (OuterVolumeSpecName: "kube-api-access-7m6f6") pod "44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" (UID: "44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f"). InnerVolumeSpecName "kube-api-access-7m6f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:57:44 crc kubenswrapper[5005]: I0225 12:57:44.957630 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" (UID: "44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.001914 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.001943 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m6f6\" (UniqueName: \"kubernetes.io/projected/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-kube-api-access-7m6f6\") on node \"crc\" DevicePath \"\"" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.001952 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.217325 5005 generic.go:334] "Generic (PLEG): container finished" podID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerID="d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5" exitCode=0 Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.217407 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4qpp8" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.217410 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerDied","Data":"d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5"} Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.217576 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qpp8" event={"ID":"44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f","Type":"ContainerDied","Data":"697a17e414c30fbbeee859ba857e5f9e7d9e570a12552a820beef5631e3d68ed"} Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.217618 5005 scope.go:117] "RemoveContainer" containerID="d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.252187 5005 scope.go:117] "RemoveContainer" containerID="6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.285709 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4qpp8"] Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.290505 5005 scope.go:117] "RemoveContainer" containerID="c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.304784 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4qpp8"] Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.330664 5005 scope.go:117] "RemoveContainer" containerID="d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5" Feb 25 12:57:45 crc kubenswrapper[5005]: E0225 12:57:45.332061 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5\": container with ID starting with d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5 not found: ID does not exist" containerID="d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.332105 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5"} err="failed to get container status \"d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5\": rpc error: code = NotFound desc = could not find container \"d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5\": container with ID starting with d3ce2a498d0c6aa066250c36d5b1b56d5642bd8c6fbcb613770fe0d4d7ab9bf5 not found: ID does not exist" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.332142 5005 scope.go:117] "RemoveContainer" containerID="6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4" Feb 25 12:57:45 crc kubenswrapper[5005]: E0225 12:57:45.332590 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4\": container with ID starting with 6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4 not found: ID does not exist" containerID="6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.332733 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4"} err="failed to get container status \"6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4\": rpc error: code = NotFound desc = could not find container \"6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4\": container with ID 
starting with 6515fb97e3e595c486c288f45c60419c4ce98ab9193b80f712467d564bab75c4 not found: ID does not exist" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.332974 5005 scope.go:117] "RemoveContainer" containerID="c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0" Feb 25 12:57:45 crc kubenswrapper[5005]: E0225 12:57:45.333492 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0\": container with ID starting with c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0 not found: ID does not exist" containerID="c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0" Feb 25 12:57:45 crc kubenswrapper[5005]: I0225 12:57:45.333579 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0"} err="failed to get container status \"c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0\": rpc error: code = NotFound desc = could not find container \"c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0\": container with ID starting with c0d06a807c0520f1f40272caab8af0b29dd4113355beb72383aa3f0bf91c71f0 not found: ID does not exist" Feb 25 12:57:46 crc kubenswrapper[5005]: I0225 12:57:46.729270 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" path="/var/lib/kubelet/pods/44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f/volumes" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.381576 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533738-flzkb"] Feb 25 12:58:00 crc kubenswrapper[5005]: E0225 12:58:00.382734 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="extract-content" Feb 25 12:58:00 crc 
kubenswrapper[5005]: I0225 12:58:00.382748 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="extract-content" Feb 25 12:58:00 crc kubenswrapper[5005]: E0225 12:58:00.382776 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="extract-utilities" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.382782 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="extract-utilities" Feb 25 12:58:00 crc kubenswrapper[5005]: E0225 12:58:00.382800 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="registry-server" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.382805 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="registry-server" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.382978 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e7abad-f8a1-4ca2-92d1-f11f8bfdd68f" containerName="registry-server" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.383573 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.389518 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.389610 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.393158 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbrj\" (UniqueName: \"kubernetes.io/projected/0f6296e4-8928-472d-98da-f638988ede4e-kube-api-access-cxbrj\") pod \"auto-csr-approver-29533738-flzkb\" (UID: \"0f6296e4-8928-472d-98da-f638988ede4e\") " pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.393299 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.692684 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbrj\" (UniqueName: \"kubernetes.io/projected/0f6296e4-8928-472d-98da-f638988ede4e-kube-api-access-cxbrj\") pod \"auto-csr-approver-29533738-flzkb\" (UID: \"0f6296e4-8928-472d-98da-f638988ede4e\") " pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:00 crc kubenswrapper[5005]: I0225 12:58:00.727122 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbrj\" (UniqueName: \"kubernetes.io/projected/0f6296e4-8928-472d-98da-f638988ede4e-kube-api-access-cxbrj\") pod \"auto-csr-approver-29533738-flzkb\" (UID: \"0f6296e4-8928-472d-98da-f638988ede4e\") " pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:01 crc kubenswrapper[5005]: I0225 12:58:01.006454 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:01 crc kubenswrapper[5005]: I0225 12:58:01.260241 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533738-flzkb"] Feb 25 12:58:02 crc kubenswrapper[5005]: I0225 12:58:02.044856 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533738-flzkb"] Feb 25 12:58:02 crc kubenswrapper[5005]: I0225 12:58:02.922421 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533738-flzkb" event={"ID":"0f6296e4-8928-472d-98da-f638988ede4e","Type":"ContainerStarted","Data":"eb781d191aa3f9b6c881bc9073d95058418882515ace3ba890e233595d084dae"} Feb 25 12:58:03 crc kubenswrapper[5005]: I0225 12:58:03.942337 5005 generic.go:334] "Generic (PLEG): container finished" podID="0f6296e4-8928-472d-98da-f638988ede4e" containerID="2b40e08a451c1dd4b6134e86fc6320c397c8910b9a59af796294df32e35794e5" exitCode=0 Feb 25 12:58:03 crc kubenswrapper[5005]: I0225 12:58:03.942638 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533738-flzkb" event={"ID":"0f6296e4-8928-472d-98da-f638988ede4e","Type":"ContainerDied","Data":"2b40e08a451c1dd4b6134e86fc6320c397c8910b9a59af796294df32e35794e5"} Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.443718 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.603871 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxbrj\" (UniqueName: \"kubernetes.io/projected/0f6296e4-8928-472d-98da-f638988ede4e-kube-api-access-cxbrj\") pod \"0f6296e4-8928-472d-98da-f638988ede4e\" (UID: \"0f6296e4-8928-472d-98da-f638988ede4e\") " Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.609723 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6296e4-8928-472d-98da-f638988ede4e-kube-api-access-cxbrj" (OuterVolumeSpecName: "kube-api-access-cxbrj") pod "0f6296e4-8928-472d-98da-f638988ede4e" (UID: "0f6296e4-8928-472d-98da-f638988ede4e"). InnerVolumeSpecName "kube-api-access-cxbrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.706432 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxbrj\" (UniqueName: \"kubernetes.io/projected/0f6296e4-8928-472d-98da-f638988ede4e-kube-api-access-cxbrj\") on node \"crc\" DevicePath \"\"" Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.961833 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533738-flzkb" event={"ID":"0f6296e4-8928-472d-98da-f638988ede4e","Type":"ContainerDied","Data":"eb781d191aa3f9b6c881bc9073d95058418882515ace3ba890e233595d084dae"} Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.961871 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb781d191aa3f9b6c881bc9073d95058418882515ace3ba890e233595d084dae" Feb 25 12:58:05 crc kubenswrapper[5005]: I0225 12:58:05.961907 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533738-flzkb" Feb 25 12:58:06 crc kubenswrapper[5005]: I0225 12:58:06.525030 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533732-g68z8"] Feb 25 12:58:06 crc kubenswrapper[5005]: I0225 12:58:06.535900 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533732-g68z8"] Feb 25 12:58:06 crc kubenswrapper[5005]: I0225 12:58:06.707253 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d833769-8a65-4715-a8e8-a578d2f993f2" path="/var/lib/kubelet/pods/4d833769-8a65-4715-a8e8-a578d2f993f2/volumes" Feb 25 12:58:23 crc kubenswrapper[5005]: I0225 12:58:23.329050 5005 scope.go:117] "RemoveContainer" containerID="adeb0084a4510d369340a666dbfdcf1aa79111a8b99c7a9f63432b9c3a8e1970" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.612302 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-45rbw"] Feb 25 12:59:00 crc kubenswrapper[5005]: E0225 12:59:00.613570 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6296e4-8928-472d-98da-f638988ede4e" containerName="oc" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.613587 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6296e4-8928-472d-98da-f638988ede4e" containerName="oc" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.614328 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6296e4-8928-472d-98da-f638988ede4e" containerName="oc" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.616640 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.644300 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-45rbw"] Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.708924 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-catalog-content\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.708993 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbq5r\" (UniqueName: \"kubernetes.io/projected/b1b8bc8d-663f-470f-8991-327d03647d7a-kube-api-access-xbq5r\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.709026 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-utilities\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.810771 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-catalog-content\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.810875 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xbq5r\" (UniqueName: \"kubernetes.io/projected/b1b8bc8d-663f-470f-8991-327d03647d7a-kube-api-access-xbq5r\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.810900 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-utilities\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.811398 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-utilities\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.811659 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-catalog-content\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.835623 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbq5r\" (UniqueName: \"kubernetes.io/projected/b1b8bc8d-663f-470f-8991-327d03647d7a-kube-api-access-xbq5r\") pod \"redhat-marketplace-45rbw\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:00 crc kubenswrapper[5005]: I0225 12:59:00.940672 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:01 crc kubenswrapper[5005]: I0225 12:59:01.946739 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-2g97t" podUID="a3c7dfa8-0263-4f57-84c7-c61b75fab65c" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 12:59:02 crc kubenswrapper[5005]: I0225 12:59:02.201855 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="fe9dcc0a-0321-4f68-929f-fb5393b97e38" containerName="galera" probeResult="failure" output="command timed out" Feb 25 12:59:04 crc kubenswrapper[5005]: I0225 12:59:04.250967 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-45rbw"] Feb 25 12:59:04 crc kubenswrapper[5005]: I0225 12:59:04.368715 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerStarted","Data":"43223f512c65f8811388304d7f5c827aa6cc71d9b66f47dac8fd7c12fa606903"} Feb 25 12:59:05 crc kubenswrapper[5005]: I0225 12:59:05.379023 5005 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerID="165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920" exitCode=0 Feb 25 12:59:05 crc kubenswrapper[5005]: I0225 12:59:05.379350 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerDied","Data":"165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920"} Feb 25 12:59:05 crc kubenswrapper[5005]: I0225 12:59:05.381619 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 12:59:07 crc kubenswrapper[5005]: I0225 
12:59:07.396113 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerStarted","Data":"782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4"} Feb 25 12:59:08 crc kubenswrapper[5005]: I0225 12:59:08.408592 5005 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerID="782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4" exitCode=0 Feb 25 12:59:08 crc kubenswrapper[5005]: I0225 12:59:08.408641 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerDied","Data":"782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4"} Feb 25 12:59:09 crc kubenswrapper[5005]: I0225 12:59:09.422087 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerStarted","Data":"87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123"} Feb 25 12:59:09 crc kubenswrapper[5005]: I0225 12:59:09.447827 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-45rbw" podStartSLOduration=5.758929978 podStartE2EDuration="9.447800623s" podCreationTimestamp="2026-02-25 12:59:00 +0000 UTC" firstStartedPulling="2026-02-25 12:59:05.381292331 +0000 UTC m=+6059.422024668" lastFinishedPulling="2026-02-25 12:59:09.070162986 +0000 UTC m=+6063.110895313" observedRunningTime="2026-02-25 12:59:09.446847053 +0000 UTC m=+6063.487579410" watchObservedRunningTime="2026-02-25 12:59:09.447800623 +0000 UTC m=+6063.488532990" Feb 25 12:59:10 crc kubenswrapper[5005]: I0225 12:59:10.941717 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 
25 12:59:10 crc kubenswrapper[5005]: I0225 12:59:10.942166 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:10 crc kubenswrapper[5005]: I0225 12:59:10.990206 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:21 crc kubenswrapper[5005]: I0225 12:59:21.018612 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:21 crc kubenswrapper[5005]: I0225 12:59:21.099687 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-45rbw"] Feb 25 12:59:21 crc kubenswrapper[5005]: I0225 12:59:21.656884 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-45rbw" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="registry-server" containerID="cri-o://87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123" gracePeriod=2 Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.257045 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.332323 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-catalog-content\") pod \"b1b8bc8d-663f-470f-8991-327d03647d7a\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.332731 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-utilities\") pod \"b1b8bc8d-663f-470f-8991-327d03647d7a\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.332979 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbq5r\" (UniqueName: \"kubernetes.io/projected/b1b8bc8d-663f-470f-8991-327d03647d7a-kube-api-access-xbq5r\") pod \"b1b8bc8d-663f-470f-8991-327d03647d7a\" (UID: \"b1b8bc8d-663f-470f-8991-327d03647d7a\") " Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.335258 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-utilities" (OuterVolumeSpecName: "utilities") pod "b1b8bc8d-663f-470f-8991-327d03647d7a" (UID: "b1b8bc8d-663f-470f-8991-327d03647d7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.358041 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b8bc8d-663f-470f-8991-327d03647d7a-kube-api-access-xbq5r" (OuterVolumeSpecName: "kube-api-access-xbq5r") pod "b1b8bc8d-663f-470f-8991-327d03647d7a" (UID: "b1b8bc8d-663f-470f-8991-327d03647d7a"). InnerVolumeSpecName "kube-api-access-xbq5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.363821 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1b8bc8d-663f-470f-8991-327d03647d7a" (UID: "b1b8bc8d-663f-470f-8991-327d03647d7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.435166 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbq5r\" (UniqueName: \"kubernetes.io/projected/b1b8bc8d-663f-470f-8991-327d03647d7a-kube-api-access-xbq5r\") on node \"crc\" DevicePath \"\"" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.435201 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.435210 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b8bc8d-663f-470f-8991-327d03647d7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.668282 5005 generic.go:334] "Generic (PLEG): container finished" podID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerID="87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123" exitCode=0 Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.668335 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerDied","Data":"87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123"} Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.668412 5005 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45rbw" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.668475 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45rbw" event={"ID":"b1b8bc8d-663f-470f-8991-327d03647d7a","Type":"ContainerDied","Data":"43223f512c65f8811388304d7f5c827aa6cc71d9b66f47dac8fd7c12fa606903"} Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.668502 5005 scope.go:117] "RemoveContainer" containerID="87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.696825 5005 scope.go:117] "RemoveContainer" containerID="782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.724082 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-45rbw"] Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.737998 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-45rbw"] Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.756127 5005 scope.go:117] "RemoveContainer" containerID="165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.784566 5005 scope.go:117] "RemoveContainer" containerID="87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123" Feb 25 12:59:22 crc kubenswrapper[5005]: E0225 12:59:22.785276 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123\": container with ID starting with 87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123 not found: ID does not exist" containerID="87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.785510 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123"} err="failed to get container status \"87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123\": rpc error: code = NotFound desc = could not find container \"87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123\": container with ID starting with 87c0983b10d19ae1492446d9058873717a3fa006ec2402632a3d45c70c9ac123 not found: ID does not exist" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.785583 5005 scope.go:117] "RemoveContainer" containerID="782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4" Feb 25 12:59:22 crc kubenswrapper[5005]: E0225 12:59:22.786020 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4\": container with ID starting with 782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4 not found: ID does not exist" containerID="782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.786069 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4"} err="failed to get container status \"782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4\": rpc error: code = NotFound desc = could not find container \"782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4\": container with ID starting with 782531c02caaa1d9ac83cedbb2f6424a4e380bf9b350944f8ab4a551951e00c4 not found: ID does not exist" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.786101 5005 scope.go:117] "RemoveContainer" containerID="165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920" Feb 25 12:59:22 crc kubenswrapper[5005]: E0225 
12:59:22.786582 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920\": container with ID starting with 165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920 not found: ID does not exist" containerID="165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920" Feb 25 12:59:22 crc kubenswrapper[5005]: I0225 12:59:22.786606 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920"} err="failed to get container status \"165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920\": rpc error: code = NotFound desc = could not find container \"165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920\": container with ID starting with 165cd294af87c8db2494239653f821dcda660d78a72504d3217d32483106b920 not found: ID does not exist" Feb 25 12:59:24 crc kubenswrapper[5005]: I0225 12:59:24.699442 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" path="/var/lib/kubelet/pods/b1b8bc8d-663f-470f-8991-327d03647d7a/volumes" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.524389 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-42hzd"] Feb 25 12:59:37 crc kubenswrapper[5005]: E0225 12:59:37.525460 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="extract-content" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.525476 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="extract-content" Feb 25 12:59:37 crc kubenswrapper[5005]: E0225 12:59:37.525492 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="extract-utilities" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.525500 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="extract-utilities" Feb 25 12:59:37 crc kubenswrapper[5005]: E0225 12:59:37.525535 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="registry-server" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.525544 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="registry-server" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.525760 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8bc8d-663f-470f-8991-327d03647d7a" containerName="registry-server" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.527623 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.560682 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42hzd"] Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.607045 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-catalog-content\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.607358 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qln\" (UniqueName: \"kubernetes.io/projected/c48fedb6-632c-4c2e-98d8-dbbc365db54f-kube-api-access-x6qln\") pod \"certified-operators-42hzd\" (UID: 
\"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.607602 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-utilities\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.709814 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qln\" (UniqueName: \"kubernetes.io/projected/c48fedb6-632c-4c2e-98d8-dbbc365db54f-kube-api-access-x6qln\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.709910 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-utilities\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.710090 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-catalog-content\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.710668 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-utilities\") pod \"certified-operators-42hzd\" (UID: 
\"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.710761 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-catalog-content\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.729952 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qln\" (UniqueName: \"kubernetes.io/projected/c48fedb6-632c-4c2e-98d8-dbbc365db54f-kube-api-access-x6qln\") pod \"certified-operators-42hzd\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:37 crc kubenswrapper[5005]: I0225 12:59:37.874418 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:38 crc kubenswrapper[5005]: I0225 12:59:38.384127 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42hzd"] Feb 25 12:59:39 crc kubenswrapper[5005]: I0225 12:59:39.096257 5005 generic.go:334] "Generic (PLEG): container finished" podID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerID="ec99067d11129bc08eb96c28456d27324994084cdf273ac1da2623ac48e0999b" exitCode=0 Feb 25 12:59:39 crc kubenswrapper[5005]: I0225 12:59:39.096646 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerDied","Data":"ec99067d11129bc08eb96c28456d27324994084cdf273ac1da2623ac48e0999b"} Feb 25 12:59:39 crc kubenswrapper[5005]: I0225 12:59:39.096684 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerStarted","Data":"bdb3b7783f1bb1cf90086f36432f82843460d805a7f1d552bccd0e30681d6b49"} Feb 25 12:59:41 crc kubenswrapper[5005]: I0225 12:59:41.114169 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerStarted","Data":"0c1cf3525e7dfb669df0a654673c668ad332decfbdb25edfecec38a2746311c6"} Feb 25 12:59:45 crc kubenswrapper[5005]: I0225 12:59:45.148000 5005 generic.go:334] "Generic (PLEG): container finished" podID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerID="0c1cf3525e7dfb669df0a654673c668ad332decfbdb25edfecec38a2746311c6" exitCode=0 Feb 25 12:59:45 crc kubenswrapper[5005]: I0225 12:59:45.148214 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" 
event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerDied","Data":"0c1cf3525e7dfb669df0a654673c668ad332decfbdb25edfecec38a2746311c6"} Feb 25 12:59:46 crc kubenswrapper[5005]: I0225 12:59:46.158715 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerStarted","Data":"624a4d37e055b3f65d505d01195e14754ccd28e31e2db1c90e821311c94bb13b"} Feb 25 12:59:46 crc kubenswrapper[5005]: I0225 12:59:46.182898 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-42hzd" podStartSLOduration=2.655027555 podStartE2EDuration="9.182865831s" podCreationTimestamp="2026-02-25 12:59:37 +0000 UTC" firstStartedPulling="2026-02-25 12:59:39.099802687 +0000 UTC m=+6093.140535054" lastFinishedPulling="2026-02-25 12:59:45.627641003 +0000 UTC m=+6099.668373330" observedRunningTime="2026-02-25 12:59:46.178063273 +0000 UTC m=+6100.218795600" watchObservedRunningTime="2026-02-25 12:59:46.182865831 +0000 UTC m=+6100.223598208" Feb 25 12:59:47 crc kubenswrapper[5005]: I0225 12:59:47.875320 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:47 crc kubenswrapper[5005]: I0225 12:59:47.875721 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:47 crc kubenswrapper[5005]: I0225 12:59:47.925928 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:57 crc kubenswrapper[5005]: I0225 12:59:57.943488 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:58 crc kubenswrapper[5005]: I0225 12:59:58.012563 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-42hzd"] Feb 25 12:59:58 crc kubenswrapper[5005]: I0225 12:59:58.087183 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 12:59:58 crc kubenswrapper[5005]: I0225 12:59:58.087261 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 12:59:58 crc kubenswrapper[5005]: I0225 12:59:58.410987 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-42hzd" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="registry-server" containerID="cri-o://624a4d37e055b3f65d505d01195e14754ccd28e31e2db1c90e821311c94bb13b" gracePeriod=2 Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.451894 5005 generic.go:334] "Generic (PLEG): container finished" podID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerID="624a4d37e055b3f65d505d01195e14754ccd28e31e2db1c90e821311c94bb13b" exitCode=0 Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.452219 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerDied","Data":"624a4d37e055b3f65d505d01195e14754ccd28e31e2db1c90e821311c94bb13b"} Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.752454 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.879816 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-catalog-content\") pod \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.879888 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-utilities\") pod \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.879928 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6qln\" (UniqueName: \"kubernetes.io/projected/c48fedb6-632c-4c2e-98d8-dbbc365db54f-kube-api-access-x6qln\") pod \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\" (UID: \"c48fedb6-632c-4c2e-98d8-dbbc365db54f\") " Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.881545 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-utilities" (OuterVolumeSpecName: "utilities") pod "c48fedb6-632c-4c2e-98d8-dbbc365db54f" (UID: "c48fedb6-632c-4c2e-98d8-dbbc365db54f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.892160 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48fedb6-632c-4c2e-98d8-dbbc365db54f-kube-api-access-x6qln" (OuterVolumeSpecName: "kube-api-access-x6qln") pod "c48fedb6-632c-4c2e-98d8-dbbc365db54f" (UID: "c48fedb6-632c-4c2e-98d8-dbbc365db54f"). InnerVolumeSpecName "kube-api-access-x6qln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.944251 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c48fedb6-632c-4c2e-98d8-dbbc365db54f" (UID: "c48fedb6-632c-4c2e-98d8-dbbc365db54f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.981850 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.981896 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48fedb6-632c-4c2e-98d8-dbbc365db54f-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 12:59:59 crc kubenswrapper[5005]: I0225 12:59:59.981910 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6qln\" (UniqueName: \"kubernetes.io/projected/c48fedb6-632c-4c2e-98d8-dbbc365db54f-kube-api-access-x6qln\") on node \"crc\" DevicePath \"\"" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.171420 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj"] Feb 25 13:00:00 crc kubenswrapper[5005]: E0225 13:00:00.171870 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="registry-server" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.171891 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="registry-server" Feb 25 13:00:00 crc kubenswrapper[5005]: E0225 13:00:00.171922 5005 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="extract-utilities" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.171931 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="extract-utilities" Feb 25 13:00:00 crc kubenswrapper[5005]: E0225 13:00:00.171965 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="extract-content" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.171973 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="extract-content" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.172216 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" containerName="registry-server" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.173016 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.175335 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.175336 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.181702 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533740-mdk5g"] Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.183169 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.184560 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729d63f2-6dfd-4319-ad89-5ddd51220848-secret-volume\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.184630 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729d63f2-6dfd-4319-ad89-5ddd51220848-config-volume\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.184942 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcs42\" (UniqueName: \"kubernetes.io/projected/06f40c75-a2da-4d43-b704-8c8be81db601-kube-api-access-rcs42\") pod \"auto-csr-approver-29533740-mdk5g\" (UID: \"06f40c75-a2da-4d43-b704-8c8be81db601\") " pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.185035 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5gt6\" (UniqueName: \"kubernetes.io/projected/729d63f2-6dfd-4319-ad89-5ddd51220848-kube-api-access-g5gt6\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.187174 5005 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.187246 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.187479 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.194757 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533740-mdk5g"] Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.204200 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj"] Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.286875 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcs42\" (UniqueName: \"kubernetes.io/projected/06f40c75-a2da-4d43-b704-8c8be81db601-kube-api-access-rcs42\") pod \"auto-csr-approver-29533740-mdk5g\" (UID: \"06f40c75-a2da-4d43-b704-8c8be81db601\") " pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.287211 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5gt6\" (UniqueName: \"kubernetes.io/projected/729d63f2-6dfd-4319-ad89-5ddd51220848-kube-api-access-g5gt6\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.287251 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729d63f2-6dfd-4319-ad89-5ddd51220848-secret-volume\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.287320 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729d63f2-6dfd-4319-ad89-5ddd51220848-config-volume\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.288526 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729d63f2-6dfd-4319-ad89-5ddd51220848-config-volume\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.291328 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729d63f2-6dfd-4319-ad89-5ddd51220848-secret-volume\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.323236 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5gt6\" (UniqueName: \"kubernetes.io/projected/729d63f2-6dfd-4319-ad89-5ddd51220848-kube-api-access-g5gt6\") pod \"collect-profiles-29533740-4ztrj\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.325217 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcs42\" (UniqueName: 
\"kubernetes.io/projected/06f40c75-a2da-4d43-b704-8c8be81db601-kube-api-access-rcs42\") pod \"auto-csr-approver-29533740-mdk5g\" (UID: \"06f40c75-a2da-4d43-b704-8c8be81db601\") " pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.497424 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.509079 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.545161 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42hzd" event={"ID":"c48fedb6-632c-4c2e-98d8-dbbc365db54f","Type":"ContainerDied","Data":"bdb3b7783f1bb1cf90086f36432f82843460d805a7f1d552bccd0e30681d6b49"} Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.545216 5005 scope.go:117] "RemoveContainer" containerID="624a4d37e055b3f65d505d01195e14754ccd28e31e2db1c90e821311c94bb13b" Feb 25 13:00:00 crc kubenswrapper[5005]: I0225 13:00:00.545442 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42hzd" Feb 25 13:00:01 crc kubenswrapper[5005]: I0225 13:00:01.114917 5005 scope.go:117] "RemoveContainer" containerID="0c1cf3525e7dfb669df0a654673c668ad332decfbdb25edfecec38a2746311c6" Feb 25 13:00:01 crc kubenswrapper[5005]: I0225 13:00:01.162365 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42hzd"] Feb 25 13:00:01 crc kubenswrapper[5005]: I0225 13:00:01.174161 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-42hzd"] Feb 25 13:00:01 crc kubenswrapper[5005]: I0225 13:00:01.315634 5005 scope.go:117] "RemoveContainer" containerID="ec99067d11129bc08eb96c28456d27324994084cdf273ac1da2623ac48e0999b" Feb 25 13:00:01 crc kubenswrapper[5005]: I0225 13:00:01.654722 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533740-mdk5g"] Feb 25 13:00:01 crc kubenswrapper[5005]: I0225 13:00:01.747315 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj"] Feb 25 13:00:01 crc kubenswrapper[5005]: W0225 13:00:01.760849 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729d63f2_6dfd_4319_ad89_5ddd51220848.slice/crio-ae2e5525cd04fad80f19d977c71ec04f941e1c1cedff3957b264afe7e9fa2f47 WatchSource:0}: Error finding container ae2e5525cd04fad80f19d977c71ec04f941e1c1cedff3957b264afe7e9fa2f47: Status 404 returned error can't find the container with id ae2e5525cd04fad80f19d977c71ec04f941e1c1cedff3957b264afe7e9fa2f47 Feb 25 13:00:02 crc kubenswrapper[5005]: I0225 13:00:02.564105 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" 
event={"ID":"06f40c75-a2da-4d43-b704-8c8be81db601","Type":"ContainerStarted","Data":"00ed3f616bdfec64a52b4b01a0dc87ccefb7770899adadaa81fc55a05da5d0ba"} Feb 25 13:00:02 crc kubenswrapper[5005]: I0225 13:00:02.566040 5005 generic.go:334] "Generic (PLEG): container finished" podID="729d63f2-6dfd-4319-ad89-5ddd51220848" containerID="e6b19cbb070152cda0233926ebefcc34299b9d578f06faddc82b7125d67797c0" exitCode=0 Feb 25 13:00:02 crc kubenswrapper[5005]: I0225 13:00:02.566255 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" event={"ID":"729d63f2-6dfd-4319-ad89-5ddd51220848","Type":"ContainerDied","Data":"e6b19cbb070152cda0233926ebefcc34299b9d578f06faddc82b7125d67797c0"} Feb 25 13:00:02 crc kubenswrapper[5005]: I0225 13:00:02.566329 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" event={"ID":"729d63f2-6dfd-4319-ad89-5ddd51220848","Type":"ContainerStarted","Data":"ae2e5525cd04fad80f19d977c71ec04f941e1c1cedff3957b264afe7e9fa2f47"} Feb 25 13:00:02 crc kubenswrapper[5005]: I0225 13:00:02.711613 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48fedb6-632c-4c2e-98d8-dbbc365db54f" path="/var/lib/kubelet/pods/c48fedb6-632c-4c2e-98d8-dbbc365db54f/volumes" Feb 25 13:00:03 crc kubenswrapper[5005]: I0225 13:00:03.998202 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.076682 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5gt6\" (UniqueName: \"kubernetes.io/projected/729d63f2-6dfd-4319-ad89-5ddd51220848-kube-api-access-g5gt6\") pod \"729d63f2-6dfd-4319-ad89-5ddd51220848\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.077067 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729d63f2-6dfd-4319-ad89-5ddd51220848-config-volume\") pod \"729d63f2-6dfd-4319-ad89-5ddd51220848\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.077268 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729d63f2-6dfd-4319-ad89-5ddd51220848-secret-volume\") pod \"729d63f2-6dfd-4319-ad89-5ddd51220848\" (UID: \"729d63f2-6dfd-4319-ad89-5ddd51220848\") " Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.077678 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729d63f2-6dfd-4319-ad89-5ddd51220848-config-volume" (OuterVolumeSpecName: "config-volume") pod "729d63f2-6dfd-4319-ad89-5ddd51220848" (UID: "729d63f2-6dfd-4319-ad89-5ddd51220848"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.077900 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729d63f2-6dfd-4319-ad89-5ddd51220848-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.082355 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729d63f2-6dfd-4319-ad89-5ddd51220848-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "729d63f2-6dfd-4319-ad89-5ddd51220848" (UID: "729d63f2-6dfd-4319-ad89-5ddd51220848"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.082447 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729d63f2-6dfd-4319-ad89-5ddd51220848-kube-api-access-g5gt6" (OuterVolumeSpecName: "kube-api-access-g5gt6") pod "729d63f2-6dfd-4319-ad89-5ddd51220848" (UID: "729d63f2-6dfd-4319-ad89-5ddd51220848"). InnerVolumeSpecName "kube-api-access-g5gt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.179953 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729d63f2-6dfd-4319-ad89-5ddd51220848-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.180011 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5gt6\" (UniqueName: \"kubernetes.io/projected/729d63f2-6dfd-4319-ad89-5ddd51220848-kube-api-access-g5gt6\") on node \"crc\" DevicePath \"\"" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.583480 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" event={"ID":"729d63f2-6dfd-4319-ad89-5ddd51220848","Type":"ContainerDied","Data":"ae2e5525cd04fad80f19d977c71ec04f941e1c1cedff3957b264afe7e9fa2f47"} Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.583527 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2e5525cd04fad80f19d977c71ec04f941e1c1cedff3957b264afe7e9fa2f47" Feb 25 13:00:04 crc kubenswrapper[5005]: I0225 13:00:04.583539 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj" Feb 25 13:00:05 crc kubenswrapper[5005]: I0225 13:00:05.073593 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6"] Feb 25 13:00:05 crc kubenswrapper[5005]: I0225 13:00:05.085899 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533695-2twb6"] Feb 25 13:00:06 crc kubenswrapper[5005]: I0225 13:00:06.696968 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6abeca-11d0-4b7f-aa7f-2ee814906b8d" path="/var/lib/kubelet/pods/be6abeca-11d0-4b7f-aa7f-2ee814906b8d/volumes" Feb 25 13:00:10 crc kubenswrapper[5005]: I0225 13:00:10.628071 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" event={"ID":"06f40c75-a2da-4d43-b704-8c8be81db601","Type":"ContainerStarted","Data":"12ad2f4e70d082261a59308ad34be53d081bed5cb25fc30c88f132d14a6263eb"} Feb 25 13:00:11 crc kubenswrapper[5005]: I0225 13:00:11.647503 5005 generic.go:334] "Generic (PLEG): container finished" podID="06f40c75-a2da-4d43-b704-8c8be81db601" containerID="12ad2f4e70d082261a59308ad34be53d081bed5cb25fc30c88f132d14a6263eb" exitCode=0 Feb 25 13:00:11 crc kubenswrapper[5005]: I0225 13:00:11.647802 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" event={"ID":"06f40c75-a2da-4d43-b704-8c8be81db601","Type":"ContainerDied","Data":"12ad2f4e70d082261a59308ad34be53d081bed5cb25fc30c88f132d14a6263eb"} Feb 25 13:00:13 crc kubenswrapper[5005]: I0225 13:00:13.075845 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:13 crc kubenswrapper[5005]: I0225 13:00:13.161194 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcs42\" (UniqueName: \"kubernetes.io/projected/06f40c75-a2da-4d43-b704-8c8be81db601-kube-api-access-rcs42\") pod \"06f40c75-a2da-4d43-b704-8c8be81db601\" (UID: \"06f40c75-a2da-4d43-b704-8c8be81db601\") " Feb 25 13:00:13 crc kubenswrapper[5005]: I0225 13:00:13.168591 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f40c75-a2da-4d43-b704-8c8be81db601-kube-api-access-rcs42" (OuterVolumeSpecName: "kube-api-access-rcs42") pod "06f40c75-a2da-4d43-b704-8c8be81db601" (UID: "06f40c75-a2da-4d43-b704-8c8be81db601"). InnerVolumeSpecName "kube-api-access-rcs42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:00:13 crc kubenswrapper[5005]: I0225 13:00:13.263391 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcs42\" (UniqueName: \"kubernetes.io/projected/06f40c75-a2da-4d43-b704-8c8be81db601-kube-api-access-rcs42\") on node \"crc\" DevicePath \"\"" Feb 25 13:00:14 crc kubenswrapper[5005]: I0225 13:00:14.136262 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" event={"ID":"06f40c75-a2da-4d43-b704-8c8be81db601","Type":"ContainerDied","Data":"00ed3f616bdfec64a52b4b01a0dc87ccefb7770899adadaa81fc55a05da5d0ba"} Feb 25 13:00:14 crc kubenswrapper[5005]: I0225 13:00:14.136312 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ed3f616bdfec64a52b4b01a0dc87ccefb7770899adadaa81fc55a05da5d0ba" Feb 25 13:00:14 crc kubenswrapper[5005]: I0225 13:00:14.136362 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533740-mdk5g" Feb 25 13:00:14 crc kubenswrapper[5005]: I0225 13:00:14.165555 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533734-h7ck2"] Feb 25 13:00:14 crc kubenswrapper[5005]: I0225 13:00:14.177127 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533734-h7ck2"] Feb 25 13:00:14 crc kubenswrapper[5005]: I0225 13:00:14.695642 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a43239f-6794-4566-999b-c8ecae920f9b" path="/var/lib/kubelet/pods/8a43239f-6794-4566-999b-c8ecae920f9b/volumes" Feb 25 13:00:23 crc kubenswrapper[5005]: I0225 13:00:23.481899 5005 scope.go:117] "RemoveContainer" containerID="e5278458fe4050606aa287d4525d23d0f6b96fbdc7a3b6b4bf2403d70246af88" Feb 25 13:00:23 crc kubenswrapper[5005]: I0225 13:00:23.562314 5005 scope.go:117] "RemoveContainer" containerID="3abd91c50d1623e904661b1da012eaef616db0e3f72d77d482baa472994225fb" Feb 25 13:00:28 crc kubenswrapper[5005]: I0225 13:00:28.087837 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:00:28 crc kubenswrapper[5005]: I0225 13:00:28.088586 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.087103 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.089397 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.089590 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.090726 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c2ff527f1ec5ff1329d594b9ec21cfb96db8b4836612d2713336089cccc84e6"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.090934 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://6c2ff527f1ec5ff1329d594b9ec21cfb96db8b4836612d2713336089cccc84e6" gracePeriod=600 Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.546141 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="6c2ff527f1ec5ff1329d594b9ec21cfb96db8b4836612d2713336089cccc84e6" exitCode=0 Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.546243 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"6c2ff527f1ec5ff1329d594b9ec21cfb96db8b4836612d2713336089cccc84e6"} Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.546421 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f"} Feb 25 13:00:58 crc kubenswrapper[5005]: I0225 13:00:58.546459 5005 scope.go:117] "RemoveContainer" containerID="961885a00519aaff92ed827bc5e9b0b521218b07be93348ca17604bea49ccce2" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.157669 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29533741-cx7jz"] Feb 25 13:01:00 crc kubenswrapper[5005]: E0225 13:01:00.159691 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d63f2-6dfd-4319-ad89-5ddd51220848" containerName="collect-profiles" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.159793 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d63f2-6dfd-4319-ad89-5ddd51220848" containerName="collect-profiles" Feb 25 13:01:00 crc kubenswrapper[5005]: E0225 13:01:00.159893 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f40c75-a2da-4d43-b704-8c8be81db601" containerName="oc" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.159954 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f40c75-a2da-4d43-b704-8c8be81db601" containerName="oc" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.160181 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d63f2-6dfd-4319-ad89-5ddd51220848" containerName="collect-profiles" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.160247 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="06f40c75-a2da-4d43-b704-8c8be81db601" containerName="oc" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.161017 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.170933 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29533741-cx7jz"] Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.304826 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-fernet-keys\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.304953 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8q7\" (UniqueName: \"kubernetes.io/projected/797ec89c-781b-4d90-a6f5-7e54312be3d9-kube-api-access-mg8q7\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.304979 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-combined-ca-bundle\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.305014 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-config-data\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") 
" pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.407329 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-combined-ca-bundle\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.407437 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-config-data\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.407605 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-fernet-keys\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.407686 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg8q7\" (UniqueName: \"kubernetes.io/projected/797ec89c-781b-4d90-a6f5-7e54312be3d9-kube-api-access-mg8q7\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.415794 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-combined-ca-bundle\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 
13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.415841 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-config-data\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.418355 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-fernet-keys\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.426120 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg8q7\" (UniqueName: \"kubernetes.io/projected/797ec89c-781b-4d90-a6f5-7e54312be3d9-kube-api-access-mg8q7\") pod \"keystone-cron-29533741-cx7jz\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.486855 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:00 crc kubenswrapper[5005]: I0225 13:01:00.997075 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29533741-cx7jz"] Feb 25 13:01:01 crc kubenswrapper[5005]: I0225 13:01:01.577534 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533741-cx7jz" event={"ID":"797ec89c-781b-4d90-a6f5-7e54312be3d9","Type":"ContainerStarted","Data":"0cd58358ed4254db6d65f970a673664a820934497a84838dad3a587146b99b6b"} Feb 25 13:01:01 crc kubenswrapper[5005]: I0225 13:01:01.577890 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533741-cx7jz" event={"ID":"797ec89c-781b-4d90-a6f5-7e54312be3d9","Type":"ContainerStarted","Data":"4ffba9bada13ab5fb82ef4202b8f63d05be94952fadc23e34e09814660d1d2b2"} Feb 25 13:01:01 crc kubenswrapper[5005]: I0225 13:01:01.593117 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29533741-cx7jz" podStartSLOduration=1.593099623 podStartE2EDuration="1.593099623s" podCreationTimestamp="2026-02-25 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 13:01:01.590192904 +0000 UTC m=+6175.630925231" watchObservedRunningTime="2026-02-25 13:01:01.593099623 +0000 UTC m=+6175.633831950" Feb 25 13:01:07 crc kubenswrapper[5005]: I0225 13:01:07.992288 5005 generic.go:334] "Generic (PLEG): container finished" podID="797ec89c-781b-4d90-a6f5-7e54312be3d9" containerID="0cd58358ed4254db6d65f970a673664a820934497a84838dad3a587146b99b6b" exitCode=0 Feb 25 13:01:07 crc kubenswrapper[5005]: I0225 13:01:07.992478 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533741-cx7jz" 
event={"ID":"797ec89c-781b-4d90-a6f5-7e54312be3d9","Type":"ContainerDied","Data":"0cd58358ed4254db6d65f970a673664a820934497a84838dad3a587146b99b6b"} Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.401913 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.507656 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-fernet-keys\") pod \"797ec89c-781b-4d90-a6f5-7e54312be3d9\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.508723 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-config-data\") pod \"797ec89c-781b-4d90-a6f5-7e54312be3d9\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.508910 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-combined-ca-bundle\") pod \"797ec89c-781b-4d90-a6f5-7e54312be3d9\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.509048 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8q7\" (UniqueName: \"kubernetes.io/projected/797ec89c-781b-4d90-a6f5-7e54312be3d9-kube-api-access-mg8q7\") pod \"797ec89c-781b-4d90-a6f5-7e54312be3d9\" (UID: \"797ec89c-781b-4d90-a6f5-7e54312be3d9\") " Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.515084 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "797ec89c-781b-4d90-a6f5-7e54312be3d9" (UID: "797ec89c-781b-4d90-a6f5-7e54312be3d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.518673 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797ec89c-781b-4d90-a6f5-7e54312be3d9-kube-api-access-mg8q7" (OuterVolumeSpecName: "kube-api-access-mg8q7") pod "797ec89c-781b-4d90-a6f5-7e54312be3d9" (UID: "797ec89c-781b-4d90-a6f5-7e54312be3d9"). InnerVolumeSpecName "kube-api-access-mg8q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.540008 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "797ec89c-781b-4d90-a6f5-7e54312be3d9" (UID: "797ec89c-781b-4d90-a6f5-7e54312be3d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.560687 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-config-data" (OuterVolumeSpecName: "config-data") pod "797ec89c-781b-4d90-a6f5-7e54312be3d9" (UID: "797ec89c-781b-4d90-a6f5-7e54312be3d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.612012 5005 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.612052 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.612065 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797ec89c-781b-4d90-a6f5-7e54312be3d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 13:01:09 crc kubenswrapper[5005]: I0225 13:01:09.612079 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg8q7\" (UniqueName: \"kubernetes.io/projected/797ec89c-781b-4d90-a6f5-7e54312be3d9-kube-api-access-mg8q7\") on node \"crc\" DevicePath \"\"" Feb 25 13:01:10 crc kubenswrapper[5005]: I0225 13:01:10.015223 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533741-cx7jz" event={"ID":"797ec89c-781b-4d90-a6f5-7e54312be3d9","Type":"ContainerDied","Data":"4ffba9bada13ab5fb82ef4202b8f63d05be94952fadc23e34e09814660d1d2b2"} Feb 25 13:01:10 crc kubenswrapper[5005]: I0225 13:01:10.015857 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffba9bada13ab5fb82ef4202b8f63d05be94952fadc23e34e09814660d1d2b2" Feb 25 13:01:10 crc kubenswrapper[5005]: I0225 13:01:10.015332 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533741-cx7jz" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.140948 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533742-5qcss"] Feb 25 13:02:00 crc kubenswrapper[5005]: E0225 13:02:00.142070 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797ec89c-781b-4d90-a6f5-7e54312be3d9" containerName="keystone-cron" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.142091 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="797ec89c-781b-4d90-a6f5-7e54312be3d9" containerName="keystone-cron" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.142400 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="797ec89c-781b-4d90-a6f5-7e54312be3d9" containerName="keystone-cron" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.143197 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.146083 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.146108 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.147592 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.149188 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533742-5qcss"] Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.294663 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7m2\" (UniqueName: 
\"kubernetes.io/projected/3d600588-d398-4782-8c31-2002655cbfa3-kube-api-access-5r7m2\") pod \"auto-csr-approver-29533742-5qcss\" (UID: \"3d600588-d398-4782-8c31-2002655cbfa3\") " pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.396562 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7m2\" (UniqueName: \"kubernetes.io/projected/3d600588-d398-4782-8c31-2002655cbfa3-kube-api-access-5r7m2\") pod \"auto-csr-approver-29533742-5qcss\" (UID: \"3d600588-d398-4782-8c31-2002655cbfa3\") " pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.416673 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7m2\" (UniqueName: \"kubernetes.io/projected/3d600588-d398-4782-8c31-2002655cbfa3-kube-api-access-5r7m2\") pod \"auto-csr-approver-29533742-5qcss\" (UID: \"3d600588-d398-4782-8c31-2002655cbfa3\") " pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.466138 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:00 crc kubenswrapper[5005]: I0225 13:02:00.957240 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533742-5qcss"] Feb 25 13:02:01 crc kubenswrapper[5005]: I0225 13:02:01.458949 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533742-5qcss" event={"ID":"3d600588-d398-4782-8c31-2002655cbfa3","Type":"ContainerStarted","Data":"771497ce423a2ad8f2b748235258bcda94380f0ee6edb788909b48f14dfc6439"} Feb 25 13:02:02 crc kubenswrapper[5005]: I0225 13:02:02.468308 5005 generic.go:334] "Generic (PLEG): container finished" podID="3d600588-d398-4782-8c31-2002655cbfa3" containerID="c1f96d3036e46ee8090eadbb1cf034a3293056309b8a429e44a2295f9b70a069" exitCode=0 Feb 25 13:02:02 crc kubenswrapper[5005]: I0225 13:02:02.468387 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533742-5qcss" event={"ID":"3d600588-d398-4782-8c31-2002655cbfa3","Type":"ContainerDied","Data":"c1f96d3036e46ee8090eadbb1cf034a3293056309b8a429e44a2295f9b70a069"} Feb 25 13:02:03 crc kubenswrapper[5005]: I0225 13:02:03.845555 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:03 crc kubenswrapper[5005]: I0225 13:02:03.996851 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r7m2\" (UniqueName: \"kubernetes.io/projected/3d600588-d398-4782-8c31-2002655cbfa3-kube-api-access-5r7m2\") pod \"3d600588-d398-4782-8c31-2002655cbfa3\" (UID: \"3d600588-d398-4782-8c31-2002655cbfa3\") " Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.002593 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d600588-d398-4782-8c31-2002655cbfa3-kube-api-access-5r7m2" (OuterVolumeSpecName: "kube-api-access-5r7m2") pod "3d600588-d398-4782-8c31-2002655cbfa3" (UID: "3d600588-d398-4782-8c31-2002655cbfa3"). InnerVolumeSpecName "kube-api-access-5r7m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.099820 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r7m2\" (UniqueName: \"kubernetes.io/projected/3d600588-d398-4782-8c31-2002655cbfa3-kube-api-access-5r7m2\") on node \"crc\" DevicePath \"\"" Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.483950 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533742-5qcss" event={"ID":"3d600588-d398-4782-8c31-2002655cbfa3","Type":"ContainerDied","Data":"771497ce423a2ad8f2b748235258bcda94380f0ee6edb788909b48f14dfc6439"} Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.483990 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="771497ce423a2ad8f2b748235258bcda94380f0ee6edb788909b48f14dfc6439" Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.484004 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533742-5qcss" Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.929846 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533736-qmkqb"] Feb 25 13:02:04 crc kubenswrapper[5005]: I0225 13:02:04.937121 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533736-qmkqb"] Feb 25 13:02:06 crc kubenswrapper[5005]: I0225 13:02:06.697857 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0b9ea2-eb17-41fe-9a64-1683557efe11" path="/var/lib/kubelet/pods/bd0b9ea2-eb17-41fe-9a64-1683557efe11/volumes" Feb 25 13:02:23 crc kubenswrapper[5005]: I0225 13:02:23.699899 5005 scope.go:117] "RemoveContainer" containerID="3043e67c9f1cab4312ce236d718b37b4c7606fbe4b78d9e2e68b7460e41f6899" Feb 25 13:02:58 crc kubenswrapper[5005]: I0225 13:02:58.087583 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:02:58 crc kubenswrapper[5005]: I0225 13:02:58.088101 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:03:28 crc kubenswrapper[5005]: I0225 13:03:28.087519 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:03:28 crc kubenswrapper[5005]: 
I0225 13:03:28.088080 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.087471 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.087944 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.087980 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.088641 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.088687 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" containerID="cri-o://2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" gracePeriod=600 Feb 25 13:03:58 crc kubenswrapper[5005]: E0225 13:03:58.210919 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.637845 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" exitCode=0 Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.637904 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f"} Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.637952 5005 scope.go:117] "RemoveContainer" containerID="6c2ff527f1ec5ff1329d594b9ec21cfb96db8b4836612d2713336089cccc84e6" Feb 25 13:03:58 crc kubenswrapper[5005]: I0225 13:03:58.638889 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:03:58 crc kubenswrapper[5005]: E0225 13:03:58.639326 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.144184 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533744-8z852"] Feb 25 13:04:00 crc kubenswrapper[5005]: E0225 13:04:00.144818 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d600588-d398-4782-8c31-2002655cbfa3" containerName="oc" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.144830 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d600588-d398-4782-8c31-2002655cbfa3" containerName="oc" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.145008 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d600588-d398-4782-8c31-2002655cbfa3" containerName="oc" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.145743 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.153853 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.154340 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.155546 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.156045 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533744-8z852"] Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.269512 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88mw\" (UniqueName: 
\"kubernetes.io/projected/b6859e64-fa7b-4975-a332-33aee5c42f43-kube-api-access-v88mw\") pod \"auto-csr-approver-29533744-8z852\" (UID: \"b6859e64-fa7b-4975-a332-33aee5c42f43\") " pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.371585 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88mw\" (UniqueName: \"kubernetes.io/projected/b6859e64-fa7b-4975-a332-33aee5c42f43-kube-api-access-v88mw\") pod \"auto-csr-approver-29533744-8z852\" (UID: \"b6859e64-fa7b-4975-a332-33aee5c42f43\") " pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.400846 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v88mw\" (UniqueName: \"kubernetes.io/projected/b6859e64-fa7b-4975-a332-33aee5c42f43-kube-api-access-v88mw\") pod \"auto-csr-approver-29533744-8z852\" (UID: \"b6859e64-fa7b-4975-a332-33aee5c42f43\") " pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.467039 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:00 crc kubenswrapper[5005]: I0225 13:04:00.982899 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533744-8z852"] Feb 25 13:04:01 crc kubenswrapper[5005]: I0225 13:04:01.681475 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533744-8z852" event={"ID":"b6859e64-fa7b-4975-a332-33aee5c42f43","Type":"ContainerStarted","Data":"285b74f4d630ae11ad19b4ef2020dcf13fc6ee5d23bb0ce547ddd1c868490f2f"} Feb 25 13:04:02 crc kubenswrapper[5005]: I0225 13:04:02.689473 5005 generic.go:334] "Generic (PLEG): container finished" podID="b6859e64-fa7b-4975-a332-33aee5c42f43" containerID="c60a83c8f8ed0c590bf94dc5d4b4bf1c08f221915ab9e3010d6fe8762b35f9ee" exitCode=0 Feb 25 13:04:02 crc kubenswrapper[5005]: I0225 13:04:02.698577 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533744-8z852" event={"ID":"b6859e64-fa7b-4975-a332-33aee5c42f43","Type":"ContainerDied","Data":"c60a83c8f8ed0c590bf94dc5d4b4bf1c08f221915ab9e3010d6fe8762b35f9ee"} Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.255948 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.259598 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88mw\" (UniqueName: \"kubernetes.io/projected/b6859e64-fa7b-4975-a332-33aee5c42f43-kube-api-access-v88mw\") pod \"b6859e64-fa7b-4975-a332-33aee5c42f43\" (UID: \"b6859e64-fa7b-4975-a332-33aee5c42f43\") " Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.266343 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6859e64-fa7b-4975-a332-33aee5c42f43-kube-api-access-v88mw" (OuterVolumeSpecName: "kube-api-access-v88mw") pod "b6859e64-fa7b-4975-a332-33aee5c42f43" (UID: "b6859e64-fa7b-4975-a332-33aee5c42f43"). InnerVolumeSpecName "kube-api-access-v88mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.362150 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88mw\" (UniqueName: \"kubernetes.io/projected/b6859e64-fa7b-4975-a332-33aee5c42f43-kube-api-access-v88mw\") on node \"crc\" DevicePath \"\"" Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.717551 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533744-8z852" event={"ID":"b6859e64-fa7b-4975-a332-33aee5c42f43","Type":"ContainerDied","Data":"285b74f4d630ae11ad19b4ef2020dcf13fc6ee5d23bb0ce547ddd1c868490f2f"} Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.717965 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285b74f4d630ae11ad19b4ef2020dcf13fc6ee5d23bb0ce547ddd1c868490f2f" Feb 25 13:04:04 crc kubenswrapper[5005]: I0225 13:04:04.718022 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533744-8z852" Feb 25 13:04:05 crc kubenswrapper[5005]: I0225 13:04:05.352156 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533738-flzkb"] Feb 25 13:04:05 crc kubenswrapper[5005]: I0225 13:04:05.364533 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533738-flzkb"] Feb 25 13:04:06 crc kubenswrapper[5005]: I0225 13:04:06.700664 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6296e4-8928-472d-98da-f638988ede4e" path="/var/lib/kubelet/pods/0f6296e4-8928-472d-98da-f638988ede4e/volumes" Feb 25 13:04:13 crc kubenswrapper[5005]: I0225 13:04:13.685149 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:04:13 crc kubenswrapper[5005]: E0225 13:04:13.686520 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:04:23 crc kubenswrapper[5005]: I0225 13:04:23.814693 5005 scope.go:117] "RemoveContainer" containerID="2b40e08a451c1dd4b6134e86fc6320c397c8910b9a59af796294df32e35794e5" Feb 25 13:04:27 crc kubenswrapper[5005]: I0225 13:04:27.686777 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:04:27 crc kubenswrapper[5005]: E0225 13:04:27.687782 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:04:38 crc kubenswrapper[5005]: I0225 13:04:38.689183 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:04:38 crc kubenswrapper[5005]: E0225 13:04:38.689783 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:04:51 crc kubenswrapper[5005]: I0225 13:04:51.687216 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:04:51 crc kubenswrapper[5005]: E0225 13:04:51.688632 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:05:06 crc kubenswrapper[5005]: I0225 13:05:06.696542 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:05:06 crc kubenswrapper[5005]: E0225 13:05:06.697775 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.294164 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5v87p"] Feb 25 13:05:12 crc kubenswrapper[5005]: E0225 13:05:12.295286 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6859e64-fa7b-4975-a332-33aee5c42f43" containerName="oc" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.295305 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6859e64-fa7b-4975-a332-33aee5c42f43" containerName="oc" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.295545 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6859e64-fa7b-4975-a332-33aee5c42f43" containerName="oc" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.297334 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.304004 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5v87p"] Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.409232 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-utilities\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.409277 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfs4t\" (UniqueName: \"kubernetes.io/projected/06480556-60fc-4227-8365-69b8bdf8416e-kube-api-access-zfs4t\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.409479 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-catalog-content\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.510925 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-utilities\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.510969 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zfs4t\" (UniqueName: \"kubernetes.io/projected/06480556-60fc-4227-8365-69b8bdf8416e-kube-api-access-zfs4t\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.511016 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-catalog-content\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.511717 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-catalog-content\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.511729 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-utilities\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.531193 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfs4t\" (UniqueName: \"kubernetes.io/projected/06480556-60fc-4227-8365-69b8bdf8416e-kube-api-access-zfs4t\") pod \"redhat-operators-5v87p\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:12 crc kubenswrapper[5005]: I0225 13:05:12.618234 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:13 crc kubenswrapper[5005]: I0225 13:05:13.155516 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5v87p"] Feb 25 13:05:13 crc kubenswrapper[5005]: I0225 13:05:13.423914 5005 generic.go:334] "Generic (PLEG): container finished" podID="06480556-60fc-4227-8365-69b8bdf8416e" containerID="e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53" exitCode=0 Feb 25 13:05:13 crc kubenswrapper[5005]: I0225 13:05:13.424113 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerDied","Data":"e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53"} Feb 25 13:05:13 crc kubenswrapper[5005]: I0225 13:05:13.425173 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerStarted","Data":"e79195c470000db689ebd71b55f74baa3e989a68d55e6d77ec4df0dbae86a148"} Feb 25 13:05:13 crc kubenswrapper[5005]: I0225 13:05:13.425945 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:05:14 crc kubenswrapper[5005]: I0225 13:05:14.434747 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerStarted","Data":"5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea"} Feb 25 13:05:19 crc kubenswrapper[5005]: I0225 13:05:19.484408 5005 generic.go:334] "Generic (PLEG): container finished" podID="06480556-60fc-4227-8365-69b8bdf8416e" containerID="5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea" exitCode=0 Feb 25 13:05:19 crc kubenswrapper[5005]: I0225 13:05:19.484597 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerDied","Data":"5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea"} Feb 25 13:05:20 crc kubenswrapper[5005]: I0225 13:05:20.500747 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerStarted","Data":"305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71"} Feb 25 13:05:20 crc kubenswrapper[5005]: I0225 13:05:20.522703 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5v87p" podStartSLOduration=1.8204027109999998 podStartE2EDuration="8.522684892s" podCreationTimestamp="2026-02-25 13:05:12 +0000 UTC" firstStartedPulling="2026-02-25 13:05:13.425742257 +0000 UTC m=+6427.466474584" lastFinishedPulling="2026-02-25 13:05:20.128024438 +0000 UTC m=+6434.168756765" observedRunningTime="2026-02-25 13:05:20.522237238 +0000 UTC m=+6434.562969565" watchObservedRunningTime="2026-02-25 13:05:20.522684892 +0000 UTC m=+6434.563417219" Feb 25 13:05:21 crc kubenswrapper[5005]: I0225 13:05:21.685266 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:05:21 crc kubenswrapper[5005]: E0225 13:05:21.686514 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:05:22 crc kubenswrapper[5005]: I0225 13:05:22.618667 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:22 crc kubenswrapper[5005]: I0225 13:05:22.618987 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:23 crc kubenswrapper[5005]: I0225 13:05:23.675946 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5v87p" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="registry-server" probeResult="failure" output=< Feb 25 13:05:23 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 13:05:23 crc kubenswrapper[5005]: > Feb 25 13:05:32 crc kubenswrapper[5005]: I0225 13:05:32.711541 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:32 crc kubenswrapper[5005]: I0225 13:05:32.772671 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:32 crc kubenswrapper[5005]: I0225 13:05:32.953410 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5v87p"] Feb 25 13:05:34 crc kubenswrapper[5005]: I0225 13:05:34.613696 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5v87p" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="registry-server" containerID="cri-o://305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71" gracePeriod=2 Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.129691 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.311319 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfs4t\" (UniqueName: \"kubernetes.io/projected/06480556-60fc-4227-8365-69b8bdf8416e-kube-api-access-zfs4t\") pod \"06480556-60fc-4227-8365-69b8bdf8416e\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.311481 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-utilities\") pod \"06480556-60fc-4227-8365-69b8bdf8416e\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.311519 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-catalog-content\") pod \"06480556-60fc-4227-8365-69b8bdf8416e\" (UID: \"06480556-60fc-4227-8365-69b8bdf8416e\") " Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.312575 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-utilities" (OuterVolumeSpecName: "utilities") pod "06480556-60fc-4227-8365-69b8bdf8416e" (UID: "06480556-60fc-4227-8365-69b8bdf8416e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.324639 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06480556-60fc-4227-8365-69b8bdf8416e-kube-api-access-zfs4t" (OuterVolumeSpecName: "kube-api-access-zfs4t") pod "06480556-60fc-4227-8365-69b8bdf8416e" (UID: "06480556-60fc-4227-8365-69b8bdf8416e"). InnerVolumeSpecName "kube-api-access-zfs4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.413548 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.413584 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfs4t\" (UniqueName: \"kubernetes.io/projected/06480556-60fc-4227-8365-69b8bdf8416e-kube-api-access-zfs4t\") on node \"crc\" DevicePath \"\"" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.468504 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06480556-60fc-4227-8365-69b8bdf8416e" (UID: "06480556-60fc-4227-8365-69b8bdf8416e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.515155 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06480556-60fc-4227-8365-69b8bdf8416e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.628173 5005 generic.go:334] "Generic (PLEG): container finished" podID="06480556-60fc-4227-8365-69b8bdf8416e" containerID="305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71" exitCode=0 Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.628215 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerDied","Data":"305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71"} Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.628256 5005 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-5v87p" event={"ID":"06480556-60fc-4227-8365-69b8bdf8416e","Type":"ContainerDied","Data":"e79195c470000db689ebd71b55f74baa3e989a68d55e6d77ec4df0dbae86a148"} Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.628283 5005 scope.go:117] "RemoveContainer" containerID="305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.628284 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v87p" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.686081 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:05:35 crc kubenswrapper[5005]: E0225 13:05:35.686413 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.693427 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5v87p"] Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.704516 5005 scope.go:117] "RemoveContainer" containerID="5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.712209 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5v87p"] Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.748583 5005 scope.go:117] "RemoveContainer" containerID="e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 
13:05:35.784884 5005 scope.go:117] "RemoveContainer" containerID="305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71" Feb 25 13:05:35 crc kubenswrapper[5005]: E0225 13:05:35.785387 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71\": container with ID starting with 305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71 not found: ID does not exist" containerID="305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.785434 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71"} err="failed to get container status \"305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71\": rpc error: code = NotFound desc = could not find container \"305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71\": container with ID starting with 305cca91a92927b01cb96048a31faa6d2d241ec0454d562c1355280b2c40ee71 not found: ID does not exist" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.785459 5005 scope.go:117] "RemoveContainer" containerID="5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea" Feb 25 13:05:35 crc kubenswrapper[5005]: E0225 13:05:35.785943 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea\": container with ID starting with 5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea not found: ID does not exist" containerID="5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.785967 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea"} err="failed to get container status \"5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea\": rpc error: code = NotFound desc = could not find container \"5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea\": container with ID starting with 5f9765d80db4e23d9a1208d89c4bac5150f44c41da95a1deaf4d2946a86711ea not found: ID does not exist" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.785983 5005 scope.go:117] "RemoveContainer" containerID="e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53" Feb 25 13:05:35 crc kubenswrapper[5005]: E0225 13:05:35.786231 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53\": container with ID starting with e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53 not found: ID does not exist" containerID="e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53" Feb 25 13:05:35 crc kubenswrapper[5005]: I0225 13:05:35.786251 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53"} err="failed to get container status \"e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53\": rpc error: code = NotFound desc = could not find container \"e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53\": container with ID starting with e920cad3330a1df12bfb140dcd72b3e76a0056cc366d61819ba78ce70eeeda53 not found: ID does not exist" Feb 25 13:05:35 crc kubenswrapper[5005]: E0225 13:05:35.859934 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06480556_60fc_4227_8365_69b8bdf8416e.slice/crio-e79195c470000db689ebd71b55f74baa3e989a68d55e6d77ec4df0dbae86a148\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06480556_60fc_4227_8365_69b8bdf8416e.slice\": RecentStats: unable to find data in memory cache]" Feb 25 13:05:36 crc kubenswrapper[5005]: I0225 13:05:36.699250 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06480556-60fc-4227-8365-69b8bdf8416e" path="/var/lib/kubelet/pods/06480556-60fc-4227-8365-69b8bdf8416e/volumes" Feb 25 13:05:46 crc kubenswrapper[5005]: I0225 13:05:46.688673 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:05:46 crc kubenswrapper[5005]: E0225 13:05:46.692774 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.150345 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533746-4crcr"] Feb 25 13:06:00 crc kubenswrapper[5005]: E0225 13:06:00.151540 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="registry-server" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.151561 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="registry-server" Feb 25 13:06:00 crc kubenswrapper[5005]: E0225 13:06:00.151591 5005 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="extract-content" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.151603 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="extract-content" Feb 25 13:06:00 crc kubenswrapper[5005]: E0225 13:06:00.151647 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="extract-utilities" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.151656 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="extract-utilities" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.151923 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="06480556-60fc-4227-8365-69b8bdf8416e" containerName="registry-server" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.152793 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.155059 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.155315 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.159016 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.216902 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533746-4crcr"] Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.247847 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h76hv\" (UniqueName: \"kubernetes.io/projected/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d-kube-api-access-h76hv\") pod \"auto-csr-approver-29533746-4crcr\" (UID: \"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d\") " pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.349766 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h76hv\" (UniqueName: \"kubernetes.io/projected/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d-kube-api-access-h76hv\") pod \"auto-csr-approver-29533746-4crcr\" (UID: \"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d\") " pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.369698 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h76hv\" (UniqueName: \"kubernetes.io/projected/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d-kube-api-access-h76hv\") pod \"auto-csr-approver-29533746-4crcr\" (UID: \"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d\") " 
pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.524484 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:00 crc kubenswrapper[5005]: I0225 13:06:00.687831 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:06:00 crc kubenswrapper[5005]: E0225 13:06:00.688237 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:06:01 crc kubenswrapper[5005]: I0225 13:06:01.009386 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533746-4crcr"] Feb 25 13:06:01 crc kubenswrapper[5005]: W0225 13:06:01.026016 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52f1477_86f2_49a2_b28c_c2e7b68a3b6d.slice/crio-d922917f183661543a8fbc5981cc63d31a0e433f38d81209e85349426fc482a3 WatchSource:0}: Error finding container d922917f183661543a8fbc5981cc63d31a0e433f38d81209e85349426fc482a3: Status 404 returned error can't find the container with id d922917f183661543a8fbc5981cc63d31a0e433f38d81209e85349426fc482a3 Feb 25 13:06:01 crc kubenswrapper[5005]: I0225 13:06:01.901356 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533746-4crcr" event={"ID":"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d","Type":"ContainerStarted","Data":"d922917f183661543a8fbc5981cc63d31a0e433f38d81209e85349426fc482a3"} Feb 25 13:06:02 crc 
kubenswrapper[5005]: I0225 13:06:02.913617 5005 generic.go:334] "Generic (PLEG): container finished" podID="b52f1477-86f2-49a2-b28c-c2e7b68a3b6d" containerID="d1a60ac75ff5c87cb8817253d350d6c49064c68ddb6b6bf99dd52004c76f852f" exitCode=0 Feb 25 13:06:02 crc kubenswrapper[5005]: I0225 13:06:02.913677 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533746-4crcr" event={"ID":"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d","Type":"ContainerDied","Data":"d1a60ac75ff5c87cb8817253d350d6c49064c68ddb6b6bf99dd52004c76f852f"} Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.370958 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.440152 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h76hv\" (UniqueName: \"kubernetes.io/projected/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d-kube-api-access-h76hv\") pod \"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d\" (UID: \"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d\") " Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.456620 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d-kube-api-access-h76hv" (OuterVolumeSpecName: "kube-api-access-h76hv") pod "b52f1477-86f2-49a2-b28c-c2e7b68a3b6d" (UID: "b52f1477-86f2-49a2-b28c-c2e7b68a3b6d"). InnerVolumeSpecName "kube-api-access-h76hv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.541978 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h76hv\" (UniqueName: \"kubernetes.io/projected/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d-kube-api-access-h76hv\") on node \"crc\" DevicePath \"\"" Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.932666 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533746-4crcr" event={"ID":"b52f1477-86f2-49a2-b28c-c2e7b68a3b6d","Type":"ContainerDied","Data":"d922917f183661543a8fbc5981cc63d31a0e433f38d81209e85349426fc482a3"} Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.932710 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d922917f183661543a8fbc5981cc63d31a0e433f38d81209e85349426fc482a3" Feb 25 13:06:04 crc kubenswrapper[5005]: I0225 13:06:04.932725 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533746-4crcr" Feb 25 13:06:05 crc kubenswrapper[5005]: I0225 13:06:05.452850 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533740-mdk5g"] Feb 25 13:06:05 crc kubenswrapper[5005]: I0225 13:06:05.465349 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533740-mdk5g"] Feb 25 13:06:06 crc kubenswrapper[5005]: I0225 13:06:06.696624 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f40c75-a2da-4d43-b704-8c8be81db601" path="/var/lib/kubelet/pods/06f40c75-a2da-4d43-b704-8c8be81db601/volumes" Feb 25 13:06:11 crc kubenswrapper[5005]: I0225 13:06:11.686581 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:06:11 crc kubenswrapper[5005]: E0225 13:06:11.687586 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:06:23 crc kubenswrapper[5005]: I0225 13:06:23.685620 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:06:23 crc kubenswrapper[5005]: E0225 13:06:23.686357 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:06:23 crc kubenswrapper[5005]: I0225 13:06:23.937823 5005 scope.go:117] "RemoveContainer" containerID="12ad2f4e70d082261a59308ad34be53d081bed5cb25fc30c88f132d14a6263eb" Feb 25 13:06:37 crc kubenswrapper[5005]: I0225 13:06:37.686693 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:06:37 crc kubenswrapper[5005]: E0225 13:06:37.687439 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:06:50 crc kubenswrapper[5005]: I0225 13:06:50.687169 5005 scope.go:117] "RemoveContainer" 
containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:06:50 crc kubenswrapper[5005]: E0225 13:06:50.688743 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:07:04 crc kubenswrapper[5005]: I0225 13:07:04.685671 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:07:04 crc kubenswrapper[5005]: E0225 13:07:04.686771 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:07:09 crc kubenswrapper[5005]: I0225 13:07:09.574131 5005 generic.go:334] "Generic (PLEG): container finished" podID="31e745d1-9f40-4deb-adb0-7cb412b3b21f" containerID="f649e48e1a2c0d55a745d90fee17a1a5e5f4b2e66a7e264331485816dce7bd76" exitCode=0 Feb 25 13:07:09 crc kubenswrapper[5005]: I0225 13:07:09.574185 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"31e745d1-9f40-4deb-adb0-7cb412b3b21f","Type":"ContainerDied","Data":"f649e48e1a2c0d55a745d90fee17a1a5e5f4b2e66a7e264331485816dce7bd76"} Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.374403 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.452784 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Feb 25 13:07:11 crc kubenswrapper[5005]: E0225 13:07:11.453507 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e745d1-9f40-4deb-adb0-7cb412b3b21f" containerName="tempest-tests-tempest-tests-runner" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.453526 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e745d1-9f40-4deb-adb0-7cb412b3b21f" containerName="tempest-tests-tempest-tests-runner" Feb 25 13:07:11 crc kubenswrapper[5005]: E0225 13:07:11.453559 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52f1477-86f2-49a2-b28c-c2e7b68a3b6d" containerName="oc" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.453566 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52f1477-86f2-49a2-b28c-c2e7b68a3b6d" containerName="oc" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.453743 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e745d1-9f40-4deb-adb0-7cb412b3b21f" containerName="tempest-tests-tempest-tests-runner" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.453767 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52f1477-86f2-49a2-b28c-c2e7b68a3b6d" containerName="oc" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.454407 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.456506 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.456699 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.465112 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.508976 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ca-certs\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509316 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509402 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config-secret\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509445 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25gl7\" (UniqueName: \"kubernetes.io/projected/31e745d1-9f40-4deb-adb0-7cb412b3b21f-kube-api-access-25gl7\") pod 
\"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509511 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-config-data\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509536 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ceph\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509608 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-workdir\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509659 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509717 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ssh-key\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.509785 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-temporary\") pod \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\" (UID: \"31e745d1-9f40-4deb-adb0-7cb412b3b21f\") " Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.510683 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-config-data" (OuterVolumeSpecName: "config-data") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.511147 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.511686 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.511713 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.515607 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.516817 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ceph" (OuterVolumeSpecName: "ceph") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.520660 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.523508 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e745d1-9f40-4deb-adb0-7cb412b3b21f-kube-api-access-25gl7" (OuterVolumeSpecName: "kube-api-access-25gl7") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "kube-api-access-25gl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.536573 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.545132 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.553990 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.561686 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "31e745d1-9f40-4deb-adb0-7cb412b3b21f" (UID: "31e745d1-9f40-4deb-adb0-7cb412b3b21f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.593043 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"31e745d1-9f40-4deb-adb0-7cb412b3b21f","Type":"ContainerDied","Data":"3e8a6b625ca907c6660801f1488e97c6cade48122d989e8e9c402f39a5198e30"} Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.593090 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8a6b625ca907c6660801f1488e97c6cade48122d989e8e9c402f39a5198e30" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.593156 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614259 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614318 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614348 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614402 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614476 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614628 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614795 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5z6\" (UniqueName: \"kubernetes.io/projected/444bbedc-b80c-45ab-8538-c572e17d3892-kube-api-access-zm5z6\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.614927 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615006 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615049 5005 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615490 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/31e745d1-9f40-4deb-adb0-7cb412b3b21f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615526 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615545 5005 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615566 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615585 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615603 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25gl7\" (UniqueName: 
\"kubernetes.io/projected/31e745d1-9f40-4deb-adb0-7cb412b3b21f-kube-api-access-25gl7\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.615621 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/31e745d1-9f40-4deb-adb0-7cb412b3b21f-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.642467 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721059 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721215 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721340 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc 
kubenswrapper[5005]: I0225 13:07:11.721470 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721676 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721678 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5z6\" (UniqueName: \"kubernetes.io/projected/444bbedc-b80c-45ab-8538-c572e17d3892-kube-api-access-zm5z6\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721857 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721906 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " 
pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.721968 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.722420 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.723306 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.723783 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.724944 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ca-certs\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.725813 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.726405 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.726851 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.727940 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.740648 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5z6\" (UniqueName: \"kubernetes.io/projected/444bbedc-b80c-45ab-8538-c572e17d3892-kube-api-access-zm5z6\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:11 crc kubenswrapper[5005]: I0225 13:07:11.779740 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:07:12 crc kubenswrapper[5005]: I0225 13:07:12.401616 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Feb 25 13:07:12 crc kubenswrapper[5005]: I0225 13:07:12.601459 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"444bbedc-b80c-45ab-8538-c572e17d3892","Type":"ContainerStarted","Data":"5107df06501106c28db8a61ffeed00f97fefc0064117d28ae4b2ec115730ff4a"} Feb 25 13:07:13 crc kubenswrapper[5005]: I0225 13:07:13.610742 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"444bbedc-b80c-45ab-8538-c572e17d3892","Type":"ContainerStarted","Data":"d0f596c44d8b73e2e0d6c6024f64d6eaac6c89a80201cf9a36911643174077f7"} Feb 25 13:07:13 crc kubenswrapper[5005]: I0225 13:07:13.634790 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=2.634770306 podStartE2EDuration="2.634770306s" podCreationTimestamp="2026-02-25 13:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 13:07:13.625338337 +0000 UTC m=+6547.666070664" watchObservedRunningTime="2026-02-25 13:07:13.634770306 +0000 UTC m=+6547.675502643" Feb 25 13:07:15 crc kubenswrapper[5005]: I0225 13:07:15.685399 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:07:15 crc kubenswrapper[5005]: E0225 13:07:15.685947 5005 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:07:27 crc kubenswrapper[5005]: I0225 13:07:27.685343 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:07:27 crc kubenswrapper[5005]: E0225 13:07:27.686397 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:07:42 crc kubenswrapper[5005]: I0225 13:07:42.686626 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:07:42 crc kubenswrapper[5005]: E0225 13:07:42.688236 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:07:57 crc kubenswrapper[5005]: I0225 13:07:57.685287 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:07:57 crc kubenswrapper[5005]: E0225 
13:07:57.686254 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.137356 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533748-vpbtw"] Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.139903 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.142713 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.142798 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.142806 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.149901 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533748-vpbtw"] Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.177787 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5r7n\" (UniqueName: \"kubernetes.io/projected/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c-kube-api-access-r5r7n\") pod \"auto-csr-approver-29533748-vpbtw\" (UID: \"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c\") " pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 
13:08:00.279539 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5r7n\" (UniqueName: \"kubernetes.io/projected/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c-kube-api-access-r5r7n\") pod \"auto-csr-approver-29533748-vpbtw\" (UID: \"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c\") " pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.302850 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5r7n\" (UniqueName: \"kubernetes.io/projected/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c-kube-api-access-r5r7n\") pod \"auto-csr-approver-29533748-vpbtw\" (UID: \"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c\") " pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.457304 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:00 crc kubenswrapper[5005]: I0225 13:08:00.924529 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533748-vpbtw"] Feb 25 13:08:01 crc kubenswrapper[5005]: I0225 13:08:01.109200 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" event={"ID":"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c","Type":"ContainerStarted","Data":"139564acb1f54507ba844ee304b441454907a1d47a68f5dd7df5109aab6884a6"} Feb 25 13:08:03 crc kubenswrapper[5005]: I0225 13:08:03.128431 5005 generic.go:334] "Generic (PLEG): container finished" podID="7c30a7e5-45bd-4090-8c26-0ced7f76ee9c" containerID="24b6f8fe4c1c64dec158e4d8704e77966deb4840a8f31550e4db6dfb23e74534" exitCode=0 Feb 25 13:08:03 crc kubenswrapper[5005]: I0225 13:08:03.128636 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" 
event={"ID":"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c","Type":"ContainerDied","Data":"24b6f8fe4c1c64dec158e4d8704e77966deb4840a8f31550e4db6dfb23e74534"} Feb 25 13:08:04 crc kubenswrapper[5005]: I0225 13:08:04.503121 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:04 crc kubenswrapper[5005]: I0225 13:08:04.586978 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5r7n\" (UniqueName: \"kubernetes.io/projected/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c-kube-api-access-r5r7n\") pod \"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c\" (UID: \"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c\") " Feb 25 13:08:04 crc kubenswrapper[5005]: I0225 13:08:04.592425 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c-kube-api-access-r5r7n" (OuterVolumeSpecName: "kube-api-access-r5r7n") pod "7c30a7e5-45bd-4090-8c26-0ced7f76ee9c" (UID: "7c30a7e5-45bd-4090-8c26-0ced7f76ee9c"). InnerVolumeSpecName "kube-api-access-r5r7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:08:04 crc kubenswrapper[5005]: I0225 13:08:04.689269 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5r7n\" (UniqueName: \"kubernetes.io/projected/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c-kube-api-access-r5r7n\") on node \"crc\" DevicePath \"\"" Feb 25 13:08:05 crc kubenswrapper[5005]: I0225 13:08:05.150074 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" event={"ID":"7c30a7e5-45bd-4090-8c26-0ced7f76ee9c","Type":"ContainerDied","Data":"139564acb1f54507ba844ee304b441454907a1d47a68f5dd7df5109aab6884a6"} Feb 25 13:08:05 crc kubenswrapper[5005]: I0225 13:08:05.150124 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139564acb1f54507ba844ee304b441454907a1d47a68f5dd7df5109aab6884a6" Feb 25 13:08:05 crc kubenswrapper[5005]: I0225 13:08:05.150176 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533748-vpbtw" Feb 25 13:08:05 crc kubenswrapper[5005]: I0225 13:08:05.580659 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533742-5qcss"] Feb 25 13:08:05 crc kubenswrapper[5005]: I0225 13:08:05.595587 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533742-5qcss"] Feb 25 13:08:06 crc kubenswrapper[5005]: I0225 13:08:06.710236 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d600588-d398-4782-8c31-2002655cbfa3" path="/var/lib/kubelet/pods/3d600588-d398-4782-8c31-2002655cbfa3/volumes" Feb 25 13:08:11 crc kubenswrapper[5005]: I0225 13:08:11.685851 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:08:11 crc kubenswrapper[5005]: E0225 13:08:11.686715 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:08:24 crc kubenswrapper[5005]: I0225 13:08:24.063871 5005 scope.go:117] "RemoveContainer" containerID="c1f96d3036e46ee8090eadbb1cf034a3293056309b8a429e44a2295f9b70a069" Feb 25 13:08:26 crc kubenswrapper[5005]: I0225 13:08:26.709645 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:08:26 crc kubenswrapper[5005]: E0225 13:08:26.710357 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.447478 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w78xk"] Feb 25 13:08:39 crc kubenswrapper[5005]: E0225 13:08:39.448736 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c30a7e5-45bd-4090-8c26-0ced7f76ee9c" containerName="oc" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.448761 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c30a7e5-45bd-4090-8c26-0ced7f76ee9c" containerName="oc" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.449081 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c30a7e5-45bd-4090-8c26-0ced7f76ee9c" containerName="oc" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 
13:08:39.451151 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.465868 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w78xk"] Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.645059 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49pw\" (UniqueName: \"kubernetes.io/projected/e0863009-cdde-4465-a949-6c3d5b8b85cc-kube-api-access-n49pw\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.645227 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-catalog-content\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.645347 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-utilities\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.748280 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n49pw\" (UniqueName: \"kubernetes.io/projected/e0863009-cdde-4465-a949-6c3d5b8b85cc-kube-api-access-n49pw\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc 
kubenswrapper[5005]: I0225 13:08:39.748434 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-catalog-content\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.748490 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-utilities\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.749181 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-utilities\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.749237 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-catalog-content\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:39 crc kubenswrapper[5005]: I0225 13:08:39.777286 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n49pw\" (UniqueName: \"kubernetes.io/projected/e0863009-cdde-4465-a949-6c3d5b8b85cc-kube-api-access-n49pw\") pod \"community-operators-w78xk\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:40 crc kubenswrapper[5005]: I0225 13:08:40.070905 5005 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:40 crc kubenswrapper[5005]: I0225 13:08:40.509088 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w78xk"] Feb 25 13:08:40 crc kubenswrapper[5005]: W0225 13:08:40.515747 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0863009_cdde_4465_a949_6c3d5b8b85cc.slice/crio-60eaa46c407695cd31b03b12505d79daf1fd715a44dd6252645f7ce49a125f55 WatchSource:0}: Error finding container 60eaa46c407695cd31b03b12505d79daf1fd715a44dd6252645f7ce49a125f55: Status 404 returned error can't find the container with id 60eaa46c407695cd31b03b12505d79daf1fd715a44dd6252645f7ce49a125f55 Feb 25 13:08:40 crc kubenswrapper[5005]: I0225 13:08:40.525930 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerStarted","Data":"60eaa46c407695cd31b03b12505d79daf1fd715a44dd6252645f7ce49a125f55"} Feb 25 13:08:40 crc kubenswrapper[5005]: I0225 13:08:40.685498 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:08:40 crc kubenswrapper[5005]: E0225 13:08:40.685746 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:08:41 crc kubenswrapper[5005]: I0225 13:08:41.533075 5005 generic.go:334] "Generic (PLEG): container finished" podID="e0863009-cdde-4465-a949-6c3d5b8b85cc" 
containerID="15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296" exitCode=0 Feb 25 13:08:41 crc kubenswrapper[5005]: I0225 13:08:41.533175 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerDied","Data":"15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296"} Feb 25 13:08:42 crc kubenswrapper[5005]: I0225 13:08:42.542017 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerStarted","Data":"b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac"} Feb 25 13:08:43 crc kubenswrapper[5005]: I0225 13:08:43.561142 5005 generic.go:334] "Generic (PLEG): container finished" podID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerID="b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac" exitCode=0 Feb 25 13:08:43 crc kubenswrapper[5005]: I0225 13:08:43.563666 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerDied","Data":"b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac"} Feb 25 13:08:44 crc kubenswrapper[5005]: I0225 13:08:44.574883 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerStarted","Data":"15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db"} Feb 25 13:08:44 crc kubenswrapper[5005]: I0225 13:08:44.603076 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w78xk" podStartSLOduration=3.174238789 podStartE2EDuration="5.60304444s" podCreationTimestamp="2026-02-25 13:08:39 +0000 UTC" firstStartedPulling="2026-02-25 13:08:41.535131016 
+0000 UTC m=+6635.575863333" lastFinishedPulling="2026-02-25 13:08:43.963936657 +0000 UTC m=+6638.004668984" observedRunningTime="2026-02-25 13:08:44.591098963 +0000 UTC m=+6638.631831290" watchObservedRunningTime="2026-02-25 13:08:44.60304444 +0000 UTC m=+6638.643776767" Feb 25 13:08:50 crc kubenswrapper[5005]: I0225 13:08:50.071424 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:50 crc kubenswrapper[5005]: I0225 13:08:50.071735 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:50 crc kubenswrapper[5005]: I0225 13:08:50.140891 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:50 crc kubenswrapper[5005]: I0225 13:08:50.702879 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:50 crc kubenswrapper[5005]: I0225 13:08:50.747216 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w78xk"] Feb 25 13:08:52 crc kubenswrapper[5005]: I0225 13:08:52.648149 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w78xk" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="registry-server" containerID="cri-o://15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db" gracePeriod=2 Feb 25 13:08:52 crc kubenswrapper[5005]: I0225 13:08:52.714287 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:08:52 crc kubenswrapper[5005]: E0225 13:08:52.714589 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.112640 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.256473 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n49pw\" (UniqueName: \"kubernetes.io/projected/e0863009-cdde-4465-a949-6c3d5b8b85cc-kube-api-access-n49pw\") pod \"e0863009-cdde-4465-a949-6c3d5b8b85cc\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.256524 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-utilities\") pod \"e0863009-cdde-4465-a949-6c3d5b8b85cc\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.256631 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-catalog-content\") pod \"e0863009-cdde-4465-a949-6c3d5b8b85cc\" (UID: \"e0863009-cdde-4465-a949-6c3d5b8b85cc\") " Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.257619 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-utilities" (OuterVolumeSpecName: "utilities") pod "e0863009-cdde-4465-a949-6c3d5b8b85cc" (UID: "e0863009-cdde-4465-a949-6c3d5b8b85cc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.262594 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0863009-cdde-4465-a949-6c3d5b8b85cc-kube-api-access-n49pw" (OuterVolumeSpecName: "kube-api-access-n49pw") pod "e0863009-cdde-4465-a949-6c3d5b8b85cc" (UID: "e0863009-cdde-4465-a949-6c3d5b8b85cc"). InnerVolumeSpecName "kube-api-access-n49pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.328327 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0863009-cdde-4465-a949-6c3d5b8b85cc" (UID: "e0863009-cdde-4465-a949-6c3d5b8b85cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.359964 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.360018 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0863009-cdde-4465-a949-6c3d5b8b85cc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.360032 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n49pw\" (UniqueName: \"kubernetes.io/projected/e0863009-cdde-4465-a949-6c3d5b8b85cc-kube-api-access-n49pw\") on node \"crc\" DevicePath \"\"" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.660521 5005 generic.go:334] "Generic (PLEG): container finished" podID="e0863009-cdde-4465-a949-6c3d5b8b85cc" 
containerID="15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db" exitCode=0 Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.660562 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerDied","Data":"15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db"} Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.660596 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w78xk" event={"ID":"e0863009-cdde-4465-a949-6c3d5b8b85cc","Type":"ContainerDied","Data":"60eaa46c407695cd31b03b12505d79daf1fd715a44dd6252645f7ce49a125f55"} Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.660614 5005 scope.go:117] "RemoveContainer" containerID="15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.660646 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w78xk" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.703181 5005 scope.go:117] "RemoveContainer" containerID="b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.712339 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w78xk"] Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.720755 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w78xk"] Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.728277 5005 scope.go:117] "RemoveContainer" containerID="15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.765759 5005 scope.go:117] "RemoveContainer" containerID="15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db" Feb 25 13:08:53 crc kubenswrapper[5005]: E0225 13:08:53.766253 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db\": container with ID starting with 15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db not found: ID does not exist" containerID="15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.766312 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db"} err="failed to get container status \"15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db\": rpc error: code = NotFound desc = could not find container \"15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db\": container with ID starting with 15f967861e5f9ea131a78c5d0aecc1667993e9bc3d6e9e338132d1143fb206db not 
found: ID does not exist" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.766346 5005 scope.go:117] "RemoveContainer" containerID="b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac" Feb 25 13:08:53 crc kubenswrapper[5005]: E0225 13:08:53.766712 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac\": container with ID starting with b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac not found: ID does not exist" containerID="b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.766755 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac"} err="failed to get container status \"b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac\": rpc error: code = NotFound desc = could not find container \"b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac\": container with ID starting with b29235acde5bed3d563673f6db63df5e2c4a7f8ca50d033c8f6a19d0a9fa3fac not found: ID does not exist" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.766780 5005 scope.go:117] "RemoveContainer" containerID="15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296" Feb 25 13:08:53 crc kubenswrapper[5005]: E0225 13:08:53.767040 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296\": container with ID starting with 15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296 not found: ID does not exist" containerID="15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296" Feb 25 13:08:53 crc kubenswrapper[5005]: I0225 13:08:53.767071 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296"} err="failed to get container status \"15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296\": rpc error: code = NotFound desc = could not find container \"15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296\": container with ID starting with 15f8756dce2f0f8cccf10f1dbe61a29ad57afb7df71d3d34f67a2d543bcaf296 not found: ID does not exist" Feb 25 13:08:54 crc kubenswrapper[5005]: I0225 13:08:54.697145 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" path="/var/lib/kubelet/pods/e0863009-cdde-4465-a949-6c3d5b8b85cc/volumes" Feb 25 13:09:04 crc kubenswrapper[5005]: I0225 13:09:04.685659 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:09:05 crc kubenswrapper[5005]: I0225 13:09:05.767669 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"66027e5b3f25c40fa1bce2d4662da2bc4c7ab61051724db92de764ca548ffbb5"} Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.518912 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zqp94"] Feb 25 13:09:20 crc kubenswrapper[5005]: E0225 13:09:20.519941 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="registry-server" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.519963 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="registry-server" Feb 25 13:09:20 crc kubenswrapper[5005]: E0225 13:09:20.519977 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="extract-utilities" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.519986 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="extract-utilities" Feb 25 13:09:20 crc kubenswrapper[5005]: E0225 13:09:20.520001 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="extract-content" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.520009 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="extract-content" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.520270 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0863009-cdde-4465-a949-6c3d5b8b85cc" containerName="registry-server" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.522402 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.538328 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqp94"] Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.657940 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4xx\" (UniqueName: \"kubernetes.io/projected/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-kube-api-access-fq4xx\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.658010 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-catalog-content\") pod \"redhat-marketplace-zqp94\" (UID: 
\"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.658233 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-utilities\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.760858 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4xx\" (UniqueName: \"kubernetes.io/projected/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-kube-api-access-fq4xx\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.760941 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-catalog-content\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.761092 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-utilities\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.762018 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-catalog-content\") pod \"redhat-marketplace-zqp94\" (UID: 
\"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.762063 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-utilities\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.788526 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4xx\" (UniqueName: \"kubernetes.io/projected/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-kube-api-access-fq4xx\") pod \"redhat-marketplace-zqp94\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:20 crc kubenswrapper[5005]: I0225 13:09:20.853689 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:21 crc kubenswrapper[5005]: I0225 13:09:21.358647 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqp94"] Feb 25 13:09:21 crc kubenswrapper[5005]: W0225 13:09:21.360018 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode367e0dd_3f41_4b0c_8d81_855ea4b1fdf9.slice/crio-02ca954b4402901e8479b45db55f9774c8ea52a30bab2c84aa49a27d78ab6513 WatchSource:0}: Error finding container 02ca954b4402901e8479b45db55f9774c8ea52a30bab2c84aa49a27d78ab6513: Status 404 returned error can't find the container with id 02ca954b4402901e8479b45db55f9774c8ea52a30bab2c84aa49a27d78ab6513 Feb 25 13:09:21 crc kubenswrapper[5005]: I0225 13:09:21.906182 5005 generic.go:334] "Generic (PLEG): container finished" podID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" 
containerID="08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467" exitCode=0 Feb 25 13:09:21 crc kubenswrapper[5005]: I0225 13:09:21.906481 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerDied","Data":"08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467"} Feb 25 13:09:21 crc kubenswrapper[5005]: I0225 13:09:21.906506 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerStarted","Data":"02ca954b4402901e8479b45db55f9774c8ea52a30bab2c84aa49a27d78ab6513"} Feb 25 13:09:22 crc kubenswrapper[5005]: I0225 13:09:22.918624 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerStarted","Data":"7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172"} Feb 25 13:09:23 crc kubenswrapper[5005]: I0225 13:09:23.927109 5005 generic.go:334] "Generic (PLEG): container finished" podID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerID="7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172" exitCode=0 Feb 25 13:09:23 crc kubenswrapper[5005]: I0225 13:09:23.927159 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerDied","Data":"7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172"} Feb 25 13:09:24 crc kubenswrapper[5005]: I0225 13:09:24.938814 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerStarted","Data":"7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556"} Feb 25 13:09:24 crc kubenswrapper[5005]: 
I0225 13:09:24.965767 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zqp94" podStartSLOduration=2.58120793 podStartE2EDuration="4.965744162s" podCreationTimestamp="2026-02-25 13:09:20 +0000 UTC" firstStartedPulling="2026-02-25 13:09:21.911168287 +0000 UTC m=+6675.951900614" lastFinishedPulling="2026-02-25 13:09:24.295704479 +0000 UTC m=+6678.336436846" observedRunningTime="2026-02-25 13:09:24.956016894 +0000 UTC m=+6678.996749221" watchObservedRunningTime="2026-02-25 13:09:24.965744162 +0000 UTC m=+6679.006476499" Feb 25 13:09:30 crc kubenswrapper[5005]: I0225 13:09:30.853765 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:30 crc kubenswrapper[5005]: I0225 13:09:30.854294 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:30 crc kubenswrapper[5005]: I0225 13:09:30.902807 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:31 crc kubenswrapper[5005]: I0225 13:09:31.024738 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:31 crc kubenswrapper[5005]: I0225 13:09:31.145431 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqp94"] Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.008300 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zqp94" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="registry-server" containerID="cri-o://7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556" gracePeriod=2 Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.502845 5005 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.649759 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq4xx\" (UniqueName: \"kubernetes.io/projected/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-kube-api-access-fq4xx\") pod \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.649829 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-catalog-content\") pod \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.650177 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-utilities\") pod \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\" (UID: \"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9\") " Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.652488 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-utilities" (OuterVolumeSpecName: "utilities") pod "e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" (UID: "e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.656263 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-kube-api-access-fq4xx" (OuterVolumeSpecName: "kube-api-access-fq4xx") pod "e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" (UID: "e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9"). InnerVolumeSpecName "kube-api-access-fq4xx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.675308 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" (UID: "e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.752771 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.752796 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:09:33 crc kubenswrapper[5005]: I0225 13:09:33.752805 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq4xx\" (UniqueName: \"kubernetes.io/projected/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9-kube-api-access-fq4xx\") on node \"crc\" DevicePath \"\"" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.020183 5005 generic.go:334] "Generic (PLEG): container finished" podID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerID="7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556" exitCode=0 Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.020292 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqp94" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.020467 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerDied","Data":"7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556"} Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.020513 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqp94" event={"ID":"e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9","Type":"ContainerDied","Data":"02ca954b4402901e8479b45db55f9774c8ea52a30bab2c84aa49a27d78ab6513"} Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.020530 5005 scope.go:117] "RemoveContainer" containerID="7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.059642 5005 scope.go:117] "RemoveContainer" containerID="7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.068021 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqp94"] Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.085057 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqp94"] Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.098487 5005 scope.go:117] "RemoveContainer" containerID="08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.145176 5005 scope.go:117] "RemoveContainer" containerID="7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556" Feb 25 13:09:34 crc kubenswrapper[5005]: E0225 13:09:34.145682 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556\": container with ID starting with 7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556 not found: ID does not exist" containerID="7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.145715 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556"} err="failed to get container status \"7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556\": rpc error: code = NotFound desc = could not find container \"7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556\": container with ID starting with 7501915fee43cfa0202403f3624afbd74216a240aa64ad158a5071f72eb88556 not found: ID does not exist" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.145735 5005 scope.go:117] "RemoveContainer" containerID="7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172" Feb 25 13:09:34 crc kubenswrapper[5005]: E0225 13:09:34.146155 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172\": container with ID starting with 7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172 not found: ID does not exist" containerID="7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.146179 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172"} err="failed to get container status \"7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172\": rpc error: code = NotFound desc = could not find container \"7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172\": container with ID 
starting with 7b3cedebc6466a554f451138bf2b59a0bb13b5bb085141b24194368072cab172 not found: ID does not exist" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.146192 5005 scope.go:117] "RemoveContainer" containerID="08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467" Feb 25 13:09:34 crc kubenswrapper[5005]: E0225 13:09:34.146412 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467\": container with ID starting with 08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467 not found: ID does not exist" containerID="08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.146440 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467"} err="failed to get container status \"08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467\": rpc error: code = NotFound desc = could not find container \"08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467\": container with ID starting with 08a3452113256fc88fb7fe9f72893d5b65a9a01cc4ae636ff24bcdbc84eee467 not found: ID does not exist" Feb 25 13:09:34 crc kubenswrapper[5005]: I0225 13:09:34.696365 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" path="/var/lib/kubelet/pods/e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9/volumes" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.239764 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z28cv"] Feb 25 13:09:43 crc kubenswrapper[5005]: E0225 13:09:43.240697 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="registry-server" Feb 25 13:09:43 crc 
kubenswrapper[5005]: I0225 13:09:43.240710 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="registry-server" Feb 25 13:09:43 crc kubenswrapper[5005]: E0225 13:09:43.240723 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="extract-content" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.240730 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="extract-content" Feb 25 13:09:43 crc kubenswrapper[5005]: E0225 13:09:43.240748 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="extract-utilities" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.240755 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="extract-utilities" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.240934 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e367e0dd-3f41-4b0c-8d81-855ea4b1fdf9" containerName="registry-server" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.242123 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.250208 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z28cv"] Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.367539 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttg4\" (UniqueName: \"kubernetes.io/projected/be9c5bd8-e440-4370-9e4f-0a5274d3710f-kube-api-access-8ttg4\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.367592 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-utilities\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.367760 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-catalog-content\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.471018 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-catalog-content\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.471120 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8ttg4\" (UniqueName: \"kubernetes.io/projected/be9c5bd8-e440-4370-9e4f-0a5274d3710f-kube-api-access-8ttg4\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.471151 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-utilities\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.471535 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-catalog-content\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.471791 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-utilities\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.490161 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttg4\" (UniqueName: \"kubernetes.io/projected/be9c5bd8-e440-4370-9e4f-0a5274d3710f-kube-api-access-8ttg4\") pod \"certified-operators-z28cv\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:43 crc kubenswrapper[5005]: I0225 13:09:43.563347 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:44 crc kubenswrapper[5005]: I0225 13:09:44.001173 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z28cv"] Feb 25 13:09:44 crc kubenswrapper[5005]: I0225 13:09:44.099261 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerStarted","Data":"1b6e3184ef6b35a9d93f698efa5f76df93ff14a97c80d2f5df2091675c65a0aa"} Feb 25 13:09:45 crc kubenswrapper[5005]: I0225 13:09:45.125741 5005 generic.go:334] "Generic (PLEG): container finished" podID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerID="d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae" exitCode=0 Feb 25 13:09:45 crc kubenswrapper[5005]: I0225 13:09:45.125817 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerDied","Data":"d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae"} Feb 25 13:09:46 crc kubenswrapper[5005]: I0225 13:09:46.134885 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerStarted","Data":"bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd"} Feb 25 13:09:47 crc kubenswrapper[5005]: I0225 13:09:47.144082 5005 generic.go:334] "Generic (PLEG): container finished" podID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerID="bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd" exitCode=0 Feb 25 13:09:47 crc kubenswrapper[5005]: I0225 13:09:47.144136 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" 
event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerDied","Data":"bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd"} Feb 25 13:09:48 crc kubenswrapper[5005]: I0225 13:09:48.154396 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerStarted","Data":"03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f"} Feb 25 13:09:53 crc kubenswrapper[5005]: I0225 13:09:53.563549 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:53 crc kubenswrapper[5005]: I0225 13:09:53.564205 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:53 crc kubenswrapper[5005]: I0225 13:09:53.613296 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:53 crc kubenswrapper[5005]: I0225 13:09:53.631736 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z28cv" podStartSLOduration=8.134356088 podStartE2EDuration="10.631717986s" podCreationTimestamp="2026-02-25 13:09:43 +0000 UTC" firstStartedPulling="2026-02-25 13:09:45.128134189 +0000 UTC m=+6699.168866526" lastFinishedPulling="2026-02-25 13:09:47.625496097 +0000 UTC m=+6701.666228424" observedRunningTime="2026-02-25 13:09:48.174971147 +0000 UTC m=+6702.215703484" watchObservedRunningTime="2026-02-25 13:09:53.631717986 +0000 UTC m=+6707.672450313" Feb 25 13:09:54 crc kubenswrapper[5005]: I0225 13:09:54.278275 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:54 crc kubenswrapper[5005]: I0225 13:09:54.347246 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-z28cv"] Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.242930 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z28cv" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="registry-server" containerID="cri-o://03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f" gracePeriod=2 Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.735877 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.875225 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-catalog-content\") pod \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.875837 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-utilities\") pod \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.876042 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttg4\" (UniqueName: \"kubernetes.io/projected/be9c5bd8-e440-4370-9e4f-0a5274d3710f-kube-api-access-8ttg4\") pod \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\" (UID: \"be9c5bd8-e440-4370-9e4f-0a5274d3710f\") " Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.876704 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-utilities" (OuterVolumeSpecName: "utilities") pod "be9c5bd8-e440-4370-9e4f-0a5274d3710f" (UID: 
"be9c5bd8-e440-4370-9e4f-0a5274d3710f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.881664 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9c5bd8-e440-4370-9e4f-0a5274d3710f-kube-api-access-8ttg4" (OuterVolumeSpecName: "kube-api-access-8ttg4") pod "be9c5bd8-e440-4370-9e4f-0a5274d3710f" (UID: "be9c5bd8-e440-4370-9e4f-0a5274d3710f"). InnerVolumeSpecName "kube-api-access-8ttg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.978415 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:09:56 crc kubenswrapper[5005]: I0225 13:09:56.978911 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttg4\" (UniqueName: \"kubernetes.io/projected/be9c5bd8-e440-4370-9e4f-0a5274d3710f-kube-api-access-8ttg4\") on node \"crc\" DevicePath \"\"" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.260822 5005 generic.go:334] "Generic (PLEG): container finished" podID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerID="03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f" exitCode=0 Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.260877 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerDied","Data":"03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f"} Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.260915 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z28cv" 
event={"ID":"be9c5bd8-e440-4370-9e4f-0a5274d3710f","Type":"ContainerDied","Data":"1b6e3184ef6b35a9d93f698efa5f76df93ff14a97c80d2f5df2091675c65a0aa"} Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.260940 5005 scope.go:117] "RemoveContainer" containerID="03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.261151 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z28cv" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.286954 5005 scope.go:117] "RemoveContainer" containerID="bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.318188 5005 scope.go:117] "RemoveContainer" containerID="d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.362764 5005 scope.go:117] "RemoveContainer" containerID="03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f" Feb 25 13:09:57 crc kubenswrapper[5005]: E0225 13:09:57.363438 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f\": container with ID starting with 03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f not found: ID does not exist" containerID="03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.363488 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f"} err="failed to get container status \"03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f\": rpc error: code = NotFound desc = could not find container \"03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f\": 
container with ID starting with 03122ddc35f802664b7f60440d45ea35cbfdc3e965672efa03a596b631dc0c9f not found: ID does not exist" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.363522 5005 scope.go:117] "RemoveContainer" containerID="bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd" Feb 25 13:09:57 crc kubenswrapper[5005]: E0225 13:09:57.364151 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd\": container with ID starting with bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd not found: ID does not exist" containerID="bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.364238 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd"} err="failed to get container status \"bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd\": rpc error: code = NotFound desc = could not find container \"bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd\": container with ID starting with bb35ac687b24f48e86bd579ae2acad9be90ebb55fee2e89068fad7b26e0aa3cd not found: ID does not exist" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.364298 5005 scope.go:117] "RemoveContainer" containerID="d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae" Feb 25 13:09:57 crc kubenswrapper[5005]: E0225 13:09:57.364806 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae\": container with ID starting with d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae not found: ID does not exist" 
containerID="d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.364852 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae"} err="failed to get container status \"d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae\": rpc error: code = NotFound desc = could not find container \"d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae\": container with ID starting with d48c44fadd7f8778bc8c71649d92b515875a2a696b0e99a63c7c2fc17a4ad1ae not found: ID does not exist" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.562116 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be9c5bd8-e440-4370-9e4f-0a5274d3710f" (UID: "be9c5bd8-e440-4370-9e4f-0a5274d3710f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.594542 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be9c5bd8-e440-4370-9e4f-0a5274d3710f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.916518 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z28cv"] Feb 25 13:09:57 crc kubenswrapper[5005]: I0225 13:09:57.936902 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z28cv"] Feb 25 13:09:58 crc kubenswrapper[5005]: I0225 13:09:58.695286 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" path="/var/lib/kubelet/pods/be9c5bd8-e440-4370-9e4f-0a5274d3710f/volumes" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.140111 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533750-4dsc4"] Feb 25 13:10:00 crc kubenswrapper[5005]: E0225 13:10:00.141214 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="extract-content" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.141283 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="extract-content" Feb 25 13:10:00 crc kubenswrapper[5005]: E0225 13:10:00.141403 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="extract-utilities" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.141474 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="extract-utilities" Feb 25 13:10:00 crc kubenswrapper[5005]: E0225 13:10:00.141548 5005 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="registry-server" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.141604 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="registry-server" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.141826 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9c5bd8-e440-4370-9e4f-0a5274d3710f" containerName="registry-server" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.142490 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.144388 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.144572 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.144833 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.152388 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533750-4dsc4"] Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.163538 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqt29\" (UniqueName: \"kubernetes.io/projected/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec-kube-api-access-hqt29\") pod \"auto-csr-approver-29533750-4dsc4\" (UID: \"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec\") " pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.264953 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqt29\" 
(UniqueName: \"kubernetes.io/projected/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec-kube-api-access-hqt29\") pod \"auto-csr-approver-29533750-4dsc4\" (UID: \"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec\") " pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.284513 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqt29\" (UniqueName: \"kubernetes.io/projected/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec-kube-api-access-hqt29\") pod \"auto-csr-approver-29533750-4dsc4\" (UID: \"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec\") " pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.463178 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:00 crc kubenswrapper[5005]: I0225 13:10:00.950625 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533750-4dsc4"] Feb 25 13:10:00 crc kubenswrapper[5005]: W0225 13:10:00.961560 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6b1b463_ab19_4eae_bfac_ad8b1b4e93ec.slice/crio-fe22d108443de5d044d8a44442fc63cc7a159e1a9aecdce2a106710e01cc0959 WatchSource:0}: Error finding container fe22d108443de5d044d8a44442fc63cc7a159e1a9aecdce2a106710e01cc0959: Status 404 returned error can't find the container with id fe22d108443de5d044d8a44442fc63cc7a159e1a9aecdce2a106710e01cc0959 Feb 25 13:10:01 crc kubenswrapper[5005]: I0225 13:10:01.302149 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" event={"ID":"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec","Type":"ContainerStarted","Data":"fe22d108443de5d044d8a44442fc63cc7a159e1a9aecdce2a106710e01cc0959"} Feb 25 13:10:03 crc kubenswrapper[5005]: I0225 13:10:03.326416 5005 generic.go:334] "Generic (PLEG): 
container finished" podID="d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec" containerID="18649525785f20bde8867939cfc6099096ca58afca9828df95963f8671668275" exitCode=0 Feb 25 13:10:03 crc kubenswrapper[5005]: I0225 13:10:03.326487 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" event={"ID":"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec","Type":"ContainerDied","Data":"18649525785f20bde8867939cfc6099096ca58afca9828df95963f8671668275"} Feb 25 13:10:04 crc kubenswrapper[5005]: I0225 13:10:04.728130 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:04 crc kubenswrapper[5005]: I0225 13:10:04.848656 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqt29\" (UniqueName: \"kubernetes.io/projected/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec-kube-api-access-hqt29\") pod \"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec\" (UID: \"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec\") " Feb 25 13:10:04 crc kubenswrapper[5005]: I0225 13:10:04.858856 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec-kube-api-access-hqt29" (OuterVolumeSpecName: "kube-api-access-hqt29") pod "d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec" (UID: "d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec"). InnerVolumeSpecName "kube-api-access-hqt29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:10:04 crc kubenswrapper[5005]: I0225 13:10:04.952251 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqt29\" (UniqueName: \"kubernetes.io/projected/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec-kube-api-access-hqt29\") on node \"crc\" DevicePath \"\"" Feb 25 13:10:05 crc kubenswrapper[5005]: I0225 13:10:05.352841 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" event={"ID":"d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec","Type":"ContainerDied","Data":"fe22d108443de5d044d8a44442fc63cc7a159e1a9aecdce2a106710e01cc0959"} Feb 25 13:10:05 crc kubenswrapper[5005]: I0225 13:10:05.352892 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe22d108443de5d044d8a44442fc63cc7a159e1a9aecdce2a106710e01cc0959" Feb 25 13:10:05 crc kubenswrapper[5005]: I0225 13:10:05.352956 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533750-4dsc4" Feb 25 13:10:05 crc kubenswrapper[5005]: I0225 13:10:05.810905 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533744-8z852"] Feb 25 13:10:05 crc kubenswrapper[5005]: I0225 13:10:05.819212 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533744-8z852"] Feb 25 13:10:06 crc kubenswrapper[5005]: I0225 13:10:06.701619 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6859e64-fa7b-4975-a332-33aee5c42f43" path="/var/lib/kubelet/pods/b6859e64-fa7b-4975-a332-33aee5c42f43/volumes" Feb 25 13:10:24 crc kubenswrapper[5005]: I0225 13:10:24.202052 5005 scope.go:117] "RemoveContainer" containerID="c60a83c8f8ed0c590bf94dc5d4b4bf1c08f221915ab9e3010d6fe8762b35f9ee" Feb 25 13:10:55 crc kubenswrapper[5005]: I0225 13:10:55.325196 5005 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rt5kw 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 13:10:55 crc kubenswrapper[5005]: I0225 13:10:55.326027 5005 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rt5kw container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 25 13:10:55 crc kubenswrapper[5005]: I0225 13:10:55.326109 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" podUID="ae6aaf14-6bb2-4e68-a46b-03209f55ba4d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 13:10:55 crc kubenswrapper[5005]: I0225 13:10:55.326133 5005 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rt5kw" podUID="ae6aaf14-6bb2-4e68-a46b-03209f55ba4d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 25 13:11:28 crc kubenswrapper[5005]: I0225 13:11:28.087994 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:11:28 crc kubenswrapper[5005]: I0225 13:11:28.088822 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:11:54 crc kubenswrapper[5005]: I0225 13:11:54.989520 5005 generic.go:334] "Generic (PLEG): container finished" podID="444bbedc-b80c-45ab-8538-c572e17d3892" containerID="d0f596c44d8b73e2e0d6c6024f64d6eaac6c89a80201cf9a36911643174077f7" exitCode=0 Feb 25 13:11:54 crc kubenswrapper[5005]: I0225 13:11:54.989609 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"444bbedc-b80c-45ab-8538-c572e17d3892","Type":"ContainerDied","Data":"d0f596c44d8b73e2e0d6c6024f64d6eaac6c89a80201cf9a36911643174077f7"} Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.448004 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.610994 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ceph\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611065 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ssh-key\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611124 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config-secret\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: 
\"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611195 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5z6\" (UniqueName: \"kubernetes.io/projected/444bbedc-b80c-45ab-8538-c572e17d3892-kube-api-access-zm5z6\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611618 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611682 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-config-data\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611709 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ca-certs\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611780 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-workdir\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611824 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" 
(UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-temporary\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.611848 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config\") pod \"444bbedc-b80c-45ab-8538-c572e17d3892\" (UID: \"444bbedc-b80c-45ab-8538-c572e17d3892\") " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.613226 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.613482 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-config-data" (OuterVolumeSpecName: "config-data") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.617114 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444bbedc-b80c-45ab-8538-c572e17d3892-kube-api-access-zm5z6" (OuterVolumeSpecName: "kube-api-access-zm5z6") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "kube-api-access-zm5z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.618558 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ceph" (OuterVolumeSpecName: "ceph") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.618556 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.624258 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.646265 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.653716 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.654107 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.669601 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "444bbedc-b80c-45ab-8538-c572e17d3892" (UID: "444bbedc-b80c-45ab-8538-c572e17d3892"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.719946 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.719980 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.719999 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.720011 5005 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.720021 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.720031 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5z6\" (UniqueName: \"kubernetes.io/projected/444bbedc-b80c-45ab-8538-c572e17d3892-kube-api-access-zm5z6\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.720056 5005 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 
13:11:56.720065 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/444bbedc-b80c-45ab-8538-c572e17d3892-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.720074 5005 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/444bbedc-b80c-45ab-8538-c572e17d3892-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.720086 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/444bbedc-b80c-45ab-8538-c572e17d3892-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.740018 5005 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 25 13:11:56 crc kubenswrapper[5005]: I0225 13:11:56.823271 5005 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 25 13:11:57 crc kubenswrapper[5005]: I0225 13:11:57.006127 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"444bbedc-b80c-45ab-8538-c572e17d3892","Type":"ContainerDied","Data":"5107df06501106c28db8a61ffeed00f97fefc0064117d28ae4b2ec115730ff4a"} Feb 25 13:11:57 crc kubenswrapper[5005]: I0225 13:11:57.006206 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5107df06501106c28db8a61ffeed00f97fefc0064117d28ae4b2ec115730ff4a" Feb 25 13:11:57 crc kubenswrapper[5005]: I0225 13:11:57.006300 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 25 13:11:58 crc kubenswrapper[5005]: I0225 13:11:58.086870 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:11:58 crc kubenswrapper[5005]: I0225 13:11:58.087217 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.167048 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533752-7ddnq"] Feb 25 13:12:00 crc kubenswrapper[5005]: E0225 13:12:00.167880 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444bbedc-b80c-45ab-8538-c572e17d3892" containerName="tempest-tests-tempest-tests-runner" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.167901 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="444bbedc-b80c-45ab-8538-c572e17d3892" containerName="tempest-tests-tempest-tests-runner" Feb 25 13:12:00 crc kubenswrapper[5005]: E0225 13:12:00.167946 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec" containerName="oc" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.167954 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec" containerName="oc" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.168147 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec" containerName="oc" Feb 25 
13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.168165 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="444bbedc-b80c-45ab-8538-c572e17d3892" containerName="tempest-tests-tempest-tests-runner" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.168905 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.174815 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.174862 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.175148 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.182724 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533752-7ddnq"] Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.303905 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q86\" (UniqueName: \"kubernetes.io/projected/8934a323-1bc4-4d56-a124-dd020c24d20b-kube-api-access-p8q86\") pod \"auto-csr-approver-29533752-7ddnq\" (UID: \"8934a323-1bc4-4d56-a124-dd020c24d20b\") " pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.405911 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q86\" (UniqueName: \"kubernetes.io/projected/8934a323-1bc4-4d56-a124-dd020c24d20b-kube-api-access-p8q86\") pod \"auto-csr-approver-29533752-7ddnq\" (UID: \"8934a323-1bc4-4d56-a124-dd020c24d20b\") " pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:00 crc 
kubenswrapper[5005]: I0225 13:12:00.426562 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q86\" (UniqueName: \"kubernetes.io/projected/8934a323-1bc4-4d56-a124-dd020c24d20b-kube-api-access-p8q86\") pod \"auto-csr-approver-29533752-7ddnq\" (UID: \"8934a323-1bc4-4d56-a124-dd020c24d20b\") " pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:00 crc kubenswrapper[5005]: I0225 13:12:00.501883 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:01 crc kubenswrapper[5005]: I0225 13:12:01.762401 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533752-7ddnq"] Feb 25 13:12:01 crc kubenswrapper[5005]: I0225 13:12:01.772066 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:12:02 crc kubenswrapper[5005]: I0225 13:12:02.404172 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" event={"ID":"8934a323-1bc4-4d56-a124-dd020c24d20b","Type":"ContainerStarted","Data":"81312acb0bb6b688c14a0a35a4db075ec5a75d65bbeaa7a9d37e92b0e7f674b2"} Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.315574 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.317085 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.320039 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g5ll5" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.332784 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.497498 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.498123 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhx8m\" (UniqueName: \"kubernetes.io/projected/2840d4e0-d718-4faf-81f2-b5b054a2fd5d-kube-api-access-zhx8m\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.600066 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.600443 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhx8m\" (UniqueName: 
\"kubernetes.io/projected/2840d4e0-d718-4faf-81f2-b5b054a2fd5d-kube-api-access-zhx8m\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.600634 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.618497 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhx8m\" (UniqueName: \"kubernetes.io/projected/2840d4e0-d718-4faf-81f2-b5b054a2fd5d-kube-api-access-zhx8m\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.627880 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2840d4e0-d718-4faf-81f2-b5b054a2fd5d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:03 crc kubenswrapper[5005]: I0225 13:12:03.655879 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 25 13:12:04 crc kubenswrapper[5005]: I0225 13:12:04.066815 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 25 13:12:04 crc kubenswrapper[5005]: I0225 13:12:04.422496 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2840d4e0-d718-4faf-81f2-b5b054a2fd5d","Type":"ContainerStarted","Data":"df04305354a4bce62726252e864a8372789fe5e53aa49ef8b7406c1994813c74"} Feb 25 13:12:04 crc kubenswrapper[5005]: I0225 13:12:04.424592 5005 generic.go:334] "Generic (PLEG): container finished" podID="8934a323-1bc4-4d56-a124-dd020c24d20b" containerID="37b38919e556334c35a822f4af8a89a3923d17c7c5e832453e06f3674955684b" exitCode=0 Feb 25 13:12:04 crc kubenswrapper[5005]: I0225 13:12:04.424642 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" event={"ID":"8934a323-1bc4-4d56-a124-dd020c24d20b","Type":"ContainerDied","Data":"37b38919e556334c35a822f4af8a89a3923d17c7c5e832453e06f3674955684b"} Feb 25 13:12:05 crc kubenswrapper[5005]: I0225 13:12:05.439829 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2840d4e0-d718-4faf-81f2-b5b054a2fd5d","Type":"ContainerStarted","Data":"cd705b043ab787542de191cde3aa29730a89197d30fc94741e3e844d2f0ad9d4"} Feb 25 13:12:05 crc kubenswrapper[5005]: I0225 13:12:05.457605 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.700221278 podStartE2EDuration="2.457585943s" podCreationTimestamp="2026-02-25 13:12:03 +0000 UTC" firstStartedPulling="2026-02-25 13:12:04.078039924 +0000 UTC m=+6838.118772261" lastFinishedPulling="2026-02-25 13:12:04.835404599 +0000 
UTC m=+6838.876136926" observedRunningTime="2026-02-25 13:12:05.452312501 +0000 UTC m=+6839.493044838" watchObservedRunningTime="2026-02-25 13:12:05.457585943 +0000 UTC m=+6839.498318290" Feb 25 13:12:05 crc kubenswrapper[5005]: I0225 13:12:05.776859 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:05 crc kubenswrapper[5005]: I0225 13:12:05.956595 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8q86\" (UniqueName: \"kubernetes.io/projected/8934a323-1bc4-4d56-a124-dd020c24d20b-kube-api-access-p8q86\") pod \"8934a323-1bc4-4d56-a124-dd020c24d20b\" (UID: \"8934a323-1bc4-4d56-a124-dd020c24d20b\") " Feb 25 13:12:05 crc kubenswrapper[5005]: I0225 13:12:05.962116 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8934a323-1bc4-4d56-a124-dd020c24d20b-kube-api-access-p8q86" (OuterVolumeSpecName: "kube-api-access-p8q86") pod "8934a323-1bc4-4d56-a124-dd020c24d20b" (UID: "8934a323-1bc4-4d56-a124-dd020c24d20b"). InnerVolumeSpecName "kube-api-access-p8q86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:12:06 crc kubenswrapper[5005]: I0225 13:12:06.058864 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8q86\" (UniqueName: \"kubernetes.io/projected/8934a323-1bc4-4d56-a124-dd020c24d20b-kube-api-access-p8q86\") on node \"crc\" DevicePath \"\"" Feb 25 13:12:06 crc kubenswrapper[5005]: I0225 13:12:06.450316 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" Feb 25 13:12:06 crc kubenswrapper[5005]: I0225 13:12:06.450461 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533752-7ddnq" event={"ID":"8934a323-1bc4-4d56-a124-dd020c24d20b","Type":"ContainerDied","Data":"81312acb0bb6b688c14a0a35a4db075ec5a75d65bbeaa7a9d37e92b0e7f674b2"} Feb 25 13:12:06 crc kubenswrapper[5005]: I0225 13:12:06.450488 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81312acb0bb6b688c14a0a35a4db075ec5a75d65bbeaa7a9d37e92b0e7f674b2" Feb 25 13:12:06 crc kubenswrapper[5005]: I0225 13:12:06.840879 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533746-4crcr"] Feb 25 13:12:06 crc kubenswrapper[5005]: I0225 13:12:06.848453 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533746-4crcr"] Feb 25 13:12:08 crc kubenswrapper[5005]: I0225 13:12:08.700846 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52f1477-86f2-49a2-b28c-c2e7b68a3b6d" path="/var/lib/kubelet/pods/b52f1477-86f2-49a2-b28c-c2e7b68a3b6d/volumes" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.510933 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Feb 25 13:12:21 crc kubenswrapper[5005]: E0225 13:12:21.514057 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8934a323-1bc4-4d56-a124-dd020c24d20b" containerName="oc" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.514079 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8934a323-1bc4-4d56-a124-dd020c24d20b" containerName="oc" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.514309 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8934a323-1bc4-4d56-a124-dd020c24d20b" containerName="oc" Feb 25 13:12:21 crc kubenswrapper[5005]: 
I0225 13:12:21.515024 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.517487 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.517556 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.517776 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.520236 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.522423 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.523447 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594198 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594268 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-public-key\") pod 
\"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594340 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594439 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594474 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594606 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4k8\" (UniqueName: \"kubernetes.io/projected/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kube-api-access-5s4k8\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 
13:12:21.594669 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594771 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594887 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594929 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.594998 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ca-certs\") pod 
\"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.595048 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697256 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697672 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697707 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s4k8\" (UniqueName: \"kubernetes.io/projected/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kube-api-access-5s4k8\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697745 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697785 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697838 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697868 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697909 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697944 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.698010 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.698033 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.698078 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.697859 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.699096 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.699432 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.699456 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.699558 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc 
kubenswrapper[5005]: I0225 13:12:21.699662 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.701446 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.706630 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.706676 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.707041 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.708439 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.722891 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s4k8\" (UniqueName: \"kubernetes.io/projected/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kube-api-access-5s4k8\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.727542 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:21 crc kubenswrapper[5005]: I0225 13:12:21.847724 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:12:22 crc kubenswrapper[5005]: I0225 13:12:22.379255 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Feb 25 13:12:22 crc kubenswrapper[5005]: I0225 13:12:22.629485 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c817cda1-46cd-4d4d-bf2b-6d99af91a859","Type":"ContainerStarted","Data":"14e51da8d4293174506d3018daa4269c6763ec51614aae8155af2b057b5836cd"} Feb 25 13:12:24 crc kubenswrapper[5005]: I0225 13:12:24.349775 5005 scope.go:117] "RemoveContainer" containerID="d1a60ac75ff5c87cb8817253d350d6c49064c68ddb6b6bf99dd52004c76f852f" Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.087689 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.088285 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.088331 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.089106 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"66027e5b3f25c40fa1bce2d4662da2bc4c7ab61051724db92de764ca548ffbb5"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.089152 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://66027e5b3f25c40fa1bce2d4662da2bc4c7ab61051724db92de764ca548ffbb5" gracePeriod=600 Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.699433 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="66027e5b3f25c40fa1bce2d4662da2bc4c7ab61051724db92de764ca548ffbb5" exitCode=0 Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.699778 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"66027e5b3f25c40fa1bce2d4662da2bc4c7ab61051724db92de764ca548ffbb5"} Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.699873 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089"} Feb 25 13:12:28 crc kubenswrapper[5005]: I0225 13:12:28.699892 5005 scope.go:117] "RemoveContainer" containerID="2dc296c05982df07bbfbafb51e5e803239722ae15f0371961a118244b4d71e9f" Feb 25 13:12:45 crc kubenswrapper[5005]: I0225 13:12:45.863664 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" 
event={"ID":"c817cda1-46cd-4d4d-bf2b-6d99af91a859","Type":"ContainerStarted","Data":"ddc82ce37d7a4ed47ca31bb7c98ce8c05c144b3126be4b2dad23130f4fd9c9fc"} Feb 25 13:12:45 crc kubenswrapper[5005]: I0225 13:12:45.893742 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=3.5382768479999998 podStartE2EDuration="25.893720626s" podCreationTimestamp="2026-02-25 13:12:20 +0000 UTC" firstStartedPulling="2026-02-25 13:12:22.386883898 +0000 UTC m=+6856.427616225" lastFinishedPulling="2026-02-25 13:12:44.742327666 +0000 UTC m=+6878.783060003" observedRunningTime="2026-02-25 13:12:45.881328766 +0000 UTC m=+6879.922061093" watchObservedRunningTime="2026-02-25 13:12:45.893720626 +0000 UTC m=+6879.934452963" Feb 25 13:13:53 crc kubenswrapper[5005]: I0225 13:13:53.528198 5005 generic.go:334] "Generic (PLEG): container finished" podID="c817cda1-46cd-4d4d-bf2b-6d99af91a859" containerID="ddc82ce37d7a4ed47ca31bb7c98ce8c05c144b3126be4b2dad23130f4fd9c9fc" exitCode=0 Feb 25 13:13:53 crc kubenswrapper[5005]: I0225 13:13:53.528761 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c817cda1-46cd-4d4d-bf2b-6d99af91a859","Type":"ContainerDied","Data":"ddc82ce37d7a4ed47ca31bb7c98ce8c05c144b3126be4b2dad23130f4fd9c9fc"} Feb 25 13:13:54 crc kubenswrapper[5005]: I0225 13:13:54.961735 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.040461 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Feb 25 13:13:55 crc kubenswrapper[5005]: E0225 13:13:55.041332 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c817cda1-46cd-4d4d-bf2b-6d99af91a859" containerName="tobiko-tests-tobiko" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.041351 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c817cda1-46cd-4d4d-bf2b-6d99af91a859" containerName="tobiko-tests-tobiko" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.041632 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c817cda1-46cd-4d4d-bf2b-6d99af91a859" containerName="tobiko-tests-tobiko" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.042537 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.054001 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138553 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-clouds-config\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138597 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-config\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 
13:13:55.138653 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kubeconfig\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138688 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ceph\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138712 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138763 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ca-certs\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138790 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-temporary\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138830 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-public-key\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" 
(UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138894 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-workdir\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.138959 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-openstack-config-secret\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139003 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s4k8\" (UniqueName: \"kubernetes.io/projected/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kube-api-access-5s4k8\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139034 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-private-key\") pod \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\" (UID: \"c817cda1-46cd-4d4d-bf2b-6d99af91a859\") " Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139316 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc 
kubenswrapper[5005]: I0225 13:13:55.139366 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139433 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139483 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139514 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.139916 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: 
\"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.140013 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.140047 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.140068 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.140121 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49k5c\" (UniqueName: \"kubernetes.io/projected/5667c5d7-d8e5-482e-8463-6660b2289aa5-kube-api-access-49k5c\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.140146 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.140610 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.145486 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.146561 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ceph" (OuterVolumeSpecName: "ceph") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.148641 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kube-api-access-5s4k8" (OuterVolumeSpecName: "kube-api-access-5s4k8") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "kube-api-access-5s4k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.167821 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.172832 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.173043 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.177842 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "kubeconfig". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.185302 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.204246 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.209437 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "test-operator-clouds-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.242341 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.242412 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.242463 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243014 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243152 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: 
\"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243213 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243262 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243360 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243416 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243443 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " 
pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243495 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49k5c\" (UniqueName: \"kubernetes.io/projected/5667c5d7-d8e5-482e-8463-6660b2289aa5-kube-api-access-49k5c\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243515 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243592 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243609 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s4k8\" (UniqueName: \"kubernetes.io/projected/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kube-api-access-5s4k8\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243621 5005 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243634 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-clouds-config\") on node \"crc\" 
DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243646 5005 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-config\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243659 5005 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-kubeconfig\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243670 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243681 5005 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c817cda1-46cd-4d4d-bf2b-6d99af91a859-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243692 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.243705 5005 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/c817cda1-46cd-4d4d-bf2b-6d99af91a859-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.244534 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " 
pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.245520 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.245603 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.245741 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.246006 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.246656 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: 
\"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.249610 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.250180 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.250743 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.251120 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.261092 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49k5c\" (UniqueName: \"kubernetes.io/projected/5667c5d7-d8e5-482e-8463-6660b2289aa5-kube-api-access-49k5c\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 
25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.272225 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.360790 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.553039 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"c817cda1-46cd-4d4d-bf2b-6d99af91a859","Type":"ContainerDied","Data":"14e51da8d4293174506d3018daa4269c6763ec51614aae8155af2b057b5836cd"} Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.553472 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e51da8d4293174506d3018daa4269c6763ec51614aae8155af2b057b5836cd" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.553256 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 25 13:13:55 crc kubenswrapper[5005]: I0225 13:13:55.957059 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Feb 25 13:13:55 crc kubenswrapper[5005]: W0225 13:13:55.965052 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5667c5d7_d8e5_482e_8463_6660b2289aa5.slice/crio-817866e5fd7d8edd97b5c1c2b3a0473c8cfe11f20bfb49d033db34b0adb82bdf WatchSource:0}: Error finding container 817866e5fd7d8edd97b5c1c2b3a0473c8cfe11f20bfb49d033db34b0adb82bdf: Status 404 returned error can't find the container with id 817866e5fd7d8edd97b5c1c2b3a0473c8cfe11f20bfb49d033db34b0adb82bdf Feb 25 13:13:56 crc kubenswrapper[5005]: I0225 13:13:56.378110 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c817cda1-46cd-4d4d-bf2b-6d99af91a859" (UID: "c817cda1-46cd-4d4d-bf2b-6d99af91a859"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:13:56 crc kubenswrapper[5005]: I0225 13:13:56.469970 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c817cda1-46cd-4d4d-bf2b-6d99af91a859-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 25 13:13:56 crc kubenswrapper[5005]: I0225 13:13:56.561916 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"5667c5d7-d8e5-482e-8463-6660b2289aa5","Type":"ContainerStarted","Data":"1c127fbd1e33ca609619fdcbfdd93214197292810cd25a46940b270fcc66e34c"} Feb 25 13:13:56 crc kubenswrapper[5005]: I0225 13:13:56.562261 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"5667c5d7-d8e5-482e-8463-6660b2289aa5","Type":"ContainerStarted","Data":"817866e5fd7d8edd97b5c1c2b3a0473c8cfe11f20bfb49d033db34b0adb82bdf"} Feb 25 13:13:56 crc kubenswrapper[5005]: I0225 13:13:56.586077 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=2.586058098 podStartE2EDuration="2.586058098s" podCreationTimestamp="2026-02-25 13:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 13:13:56.576931798 +0000 UTC m=+6950.617664125" watchObservedRunningTime="2026-02-25 13:13:56.586058098 +0000 UTC m=+6950.626790425" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.127017 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533754-fz9h2"] Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.129611 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.132731 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.133199 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.133437 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.145845 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533754-fz9h2"] Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.244463 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m957\" (UniqueName: \"kubernetes.io/projected/d910565a-3322-4078-9a06-204b33bcece8-kube-api-access-9m957\") pod \"auto-csr-approver-29533754-fz9h2\" (UID: \"d910565a-3322-4078-9a06-204b33bcece8\") " pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.345763 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m957\" (UniqueName: \"kubernetes.io/projected/d910565a-3322-4078-9a06-204b33bcece8-kube-api-access-9m957\") pod \"auto-csr-approver-29533754-fz9h2\" (UID: \"d910565a-3322-4078-9a06-204b33bcece8\") " pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.363757 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m957\" (UniqueName: \"kubernetes.io/projected/d910565a-3322-4078-9a06-204b33bcece8-kube-api-access-9m957\") pod \"auto-csr-approver-29533754-fz9h2\" (UID: \"d910565a-3322-4078-9a06-204b33bcece8\") " 
pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.447301 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:00 crc kubenswrapper[5005]: I0225 13:14:00.955908 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533754-fz9h2"] Feb 25 13:14:00 crc kubenswrapper[5005]: W0225 13:14:00.967180 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd910565a_3322_4078_9a06_204b33bcece8.slice/crio-57b21beb82f4296a4368cf443ad6348b08c3e838cb03cf29a7098f2d3f9b3ead WatchSource:0}: Error finding container 57b21beb82f4296a4368cf443ad6348b08c3e838cb03cf29a7098f2d3f9b3ead: Status 404 returned error can't find the container with id 57b21beb82f4296a4368cf443ad6348b08c3e838cb03cf29a7098f2d3f9b3ead Feb 25 13:14:01 crc kubenswrapper[5005]: I0225 13:14:01.627341 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" event={"ID":"d910565a-3322-4078-9a06-204b33bcece8","Type":"ContainerStarted","Data":"57b21beb82f4296a4368cf443ad6348b08c3e838cb03cf29a7098f2d3f9b3ead"} Feb 25 13:14:02 crc kubenswrapper[5005]: I0225 13:14:02.636672 5005 generic.go:334] "Generic (PLEG): container finished" podID="d910565a-3322-4078-9a06-204b33bcece8" containerID="58a278f120a8a16cd383554711507040b797c1ff01e6e9a240f4fdc73cc04fdc" exitCode=0 Feb 25 13:14:02 crc kubenswrapper[5005]: I0225 13:14:02.637912 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" event={"ID":"d910565a-3322-4078-9a06-204b33bcece8","Type":"ContainerDied","Data":"58a278f120a8a16cd383554711507040b797c1ff01e6e9a240f4fdc73cc04fdc"} Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.019789 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.121275 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m957\" (UniqueName: \"kubernetes.io/projected/d910565a-3322-4078-9a06-204b33bcece8-kube-api-access-9m957\") pod \"d910565a-3322-4078-9a06-204b33bcece8\" (UID: \"d910565a-3322-4078-9a06-204b33bcece8\") " Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.127629 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d910565a-3322-4078-9a06-204b33bcece8-kube-api-access-9m957" (OuterVolumeSpecName: "kube-api-access-9m957") pod "d910565a-3322-4078-9a06-204b33bcece8" (UID: "d910565a-3322-4078-9a06-204b33bcece8"). InnerVolumeSpecName "kube-api-access-9m957". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.234775 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m957\" (UniqueName: \"kubernetes.io/projected/d910565a-3322-4078-9a06-204b33bcece8-kube-api-access-9m957\") on node \"crc\" DevicePath \"\"" Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.659452 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" event={"ID":"d910565a-3322-4078-9a06-204b33bcece8","Type":"ContainerDied","Data":"57b21beb82f4296a4368cf443ad6348b08c3e838cb03cf29a7098f2d3f9b3ead"} Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.659496 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b21beb82f4296a4368cf443ad6348b08c3e838cb03cf29a7098f2d3f9b3ead" Feb 25 13:14:04 crc kubenswrapper[5005]: I0225 13:14:04.659531 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533754-fz9h2" Feb 25 13:14:05 crc kubenswrapper[5005]: I0225 13:14:05.092317 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533748-vpbtw"] Feb 25 13:14:05 crc kubenswrapper[5005]: I0225 13:14:05.105189 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533748-vpbtw"] Feb 25 13:14:06 crc kubenswrapper[5005]: I0225 13:14:06.705422 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c30a7e5-45bd-4090-8c26-0ced7f76ee9c" path="/var/lib/kubelet/pods/7c30a7e5-45bd-4090-8c26-0ced7f76ee9c/volumes" Feb 25 13:14:24 crc kubenswrapper[5005]: I0225 13:14:24.469471 5005 scope.go:117] "RemoveContainer" containerID="24b6f8fe4c1c64dec158e4d8704e77966deb4840a8f31550e4db6dfb23e74534" Feb 25 13:14:28 crc kubenswrapper[5005]: I0225 13:14:28.086878 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:14:28 crc kubenswrapper[5005]: I0225 13:14:28.087302 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:14:58 crc kubenswrapper[5005]: I0225 13:14:58.087615 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:14:58 crc kubenswrapper[5005]: 
I0225 13:14:58.088293 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.170315 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk"] Feb 25 13:15:00 crc kubenswrapper[5005]: E0225 13:15:00.171215 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d910565a-3322-4078-9a06-204b33bcece8" containerName="oc" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.171233 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d910565a-3322-4078-9a06-204b33bcece8" containerName="oc" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.171474 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d910565a-3322-4078-9a06-204b33bcece8" containerName="oc" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.172449 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.178492 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.180531 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.191580 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk"] Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.357574 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-secret-volume\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.357733 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slz5q\" (UniqueName: \"kubernetes.io/projected/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-kube-api-access-slz5q\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.357945 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-config-volume\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.460035 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-config-volume\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.460112 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-secret-volume\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.460206 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slz5q\" (UniqueName: \"kubernetes.io/projected/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-kube-api-access-slz5q\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.461325 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-config-volume\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.478493 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-secret-volume\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.482477 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slz5q\" (UniqueName: \"kubernetes.io/projected/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-kube-api-access-slz5q\") pod \"collect-profiles-29533755-nc8nk\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:00 crc kubenswrapper[5005]: I0225 13:15:00.513581 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:01 crc kubenswrapper[5005]: I0225 13:15:01.040398 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk"] Feb 25 13:15:01 crc kubenswrapper[5005]: I0225 13:15:01.167959 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" event={"ID":"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d","Type":"ContainerStarted","Data":"fbd248afdf212685cda5619e8039058660d826575ec9d113f411570a0944aa0a"} Feb 25 13:15:02 crc kubenswrapper[5005]: I0225 13:15:02.179166 5005 generic.go:334] "Generic (PLEG): container finished" podID="435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" containerID="230e0c86c5bc0a3b383e58d316e99cafd5282f49bef16678072ef9d094421e1a" exitCode=0 Feb 25 13:15:02 crc kubenswrapper[5005]: I0225 13:15:02.179248 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" 
event={"ID":"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d","Type":"ContainerDied","Data":"230e0c86c5bc0a3b383e58d316e99cafd5282f49bef16678072ef9d094421e1a"} Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.618139 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.746494 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-secret-volume\") pod \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.747322 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slz5q\" (UniqueName: \"kubernetes.io/projected/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-kube-api-access-slz5q\") pod \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.747659 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-config-volume\") pod \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\" (UID: \"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d\") " Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.748394 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-config-volume" (OuterVolumeSpecName: "config-volume") pod "435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" (UID: "435895b4-3a4a-49ab-bf1c-bd8a4b1df37d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.748565 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.756748 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" (UID: "435895b4-3a4a-49ab-bf1c-bd8a4b1df37d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.758227 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-kube-api-access-slz5q" (OuterVolumeSpecName: "kube-api-access-slz5q") pod "435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" (UID: "435895b4-3a4a-49ab-bf1c-bd8a4b1df37d"). InnerVolumeSpecName "kube-api-access-slz5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.849789 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:03 crc kubenswrapper[5005]: I0225 13:15:03.849827 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slz5q\" (UniqueName: \"kubernetes.io/projected/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d-kube-api-access-slz5q\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:04 crc kubenswrapper[5005]: I0225 13:15:04.203922 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" event={"ID":"435895b4-3a4a-49ab-bf1c-bd8a4b1df37d","Type":"ContainerDied","Data":"fbd248afdf212685cda5619e8039058660d826575ec9d113f411570a0944aa0a"} Feb 25 13:15:04 crc kubenswrapper[5005]: I0225 13:15:04.203986 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd248afdf212685cda5619e8039058660d826575ec9d113f411570a0944aa0a" Feb 25 13:15:04 crc kubenswrapper[5005]: I0225 13:15:04.204099 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk" Feb 25 13:15:04 crc kubenswrapper[5005]: I0225 13:15:04.706531 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf"] Feb 25 13:15:04 crc kubenswrapper[5005]: I0225 13:15:04.715914 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533710-9pqdf"] Feb 25 13:15:06 crc kubenswrapper[5005]: I0225 13:15:06.696880 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e019bb9f-e3cf-4478-a93c-964ec21e955c" path="/var/lib/kubelet/pods/e019bb9f-e3cf-4478-a93c-964ec21e955c/volumes" Feb 25 13:15:21 crc kubenswrapper[5005]: I0225 13:15:21.367996 5005 generic.go:334] "Generic (PLEG): container finished" podID="5667c5d7-d8e5-482e-8463-6660b2289aa5" containerID="1c127fbd1e33ca609619fdcbfdd93214197292810cd25a46940b270fcc66e34c" exitCode=0 Feb 25 13:15:21 crc kubenswrapper[5005]: I0225 13:15:21.368113 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"5667c5d7-d8e5-482e-8463-6660b2289aa5","Type":"ContainerDied","Data":"1c127fbd1e33ca609619fdcbfdd93214197292810cd25a46940b270fcc66e34c"} Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.821817 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.870982 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871128 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-clouds-config\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871170 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49k5c\" (UniqueName: \"kubernetes.io/projected/5667c5d7-d8e5-482e-8463-6660b2289aa5-kube-api-access-49k5c\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871226 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-private-key\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871249 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-public-key\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871299 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-kubeconfig\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871895 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-openstack-config-secret\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.871930 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-temporary\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.872111 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ca-certs\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.872183 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ceph\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.872218 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-config\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " 
Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.872274 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-workdir\") pod \"5667c5d7-d8e5-482e-8463-6660b2289aa5\" (UID: \"5667c5d7-d8e5-482e-8463-6660b2289aa5\") " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.878240 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.878669 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.879312 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ceph" (OuterVolumeSpecName: "ceph") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.904686 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5667c5d7-d8e5-482e-8463-6660b2289aa5-kube-api-access-49k5c" (OuterVolumeSpecName: "kube-api-access-49k5c") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "kube-api-access-49k5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.905888 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.906999 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.911949 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "tobiko-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.926537 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.932796 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.947410 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.950463 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976353 5005 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976424 5005 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976440 5005 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-kubeconfig\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976454 5005 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976472 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976487 5005 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976499 5005 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5667c5d7-d8e5-482e-8463-6660b2289aa5-ceph\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 
13:15:22.976514 5005 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-tobiko-config\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976555 5005 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976569 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:22 crc kubenswrapper[5005]: I0225 13:15:22.976583 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49k5c\" (UniqueName: \"kubernetes.io/projected/5667c5d7-d8e5-482e-8463-6660b2289aa5-kube-api-access-49k5c\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:23 crc kubenswrapper[5005]: I0225 13:15:23.009753 5005 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 25 13:15:23 crc kubenswrapper[5005]: I0225 13:15:23.078364 5005 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:23 crc kubenswrapper[5005]: I0225 13:15:23.392266 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"5667c5d7-d8e5-482e-8463-6660b2289aa5","Type":"ContainerDied","Data":"817866e5fd7d8edd97b5c1c2b3a0473c8cfe11f20bfb49d033db34b0adb82bdf"} Feb 25 13:15:23 crc kubenswrapper[5005]: I0225 13:15:23.392315 5005 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="817866e5fd7d8edd97b5c1c2b3a0473c8cfe11f20bfb49d033db34b0adb82bdf" Feb 25 13:15:23 crc kubenswrapper[5005]: I0225 13:15:23.392410 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 25 13:15:24 crc kubenswrapper[5005]: I0225 13:15:24.310850 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5667c5d7-d8e5-482e-8463-6660b2289aa5" (UID: "5667c5d7-d8e5-482e-8463-6660b2289aa5"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:15:24 crc kubenswrapper[5005]: I0225 13:15:24.405566 5005 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5667c5d7-d8e5-482e-8463-6660b2289aa5-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 25 13:15:24 crc kubenswrapper[5005]: I0225 13:15:24.591279 5005 scope.go:117] "RemoveContainer" containerID="4a9703ca3bdc865f38f9f904bbe6322b957b88fac42f2c8950a3006c99f74dd7" Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.087589 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.087931 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 
13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.087977 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.088634 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.088695 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" gracePeriod=600 Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.437743 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" exitCode=0 Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.437933 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089"} Feb 25 13:15:28 crc kubenswrapper[5005]: I0225 13:15:28.438132 5005 scope.go:117] "RemoveContainer" containerID="66027e5b3f25c40fa1bce2d4662da2bc4c7ab61051724db92de764ca548ffbb5" Feb 25 13:15:28 crc kubenswrapper[5005]: E0225 13:15:28.770366 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:15:29 crc kubenswrapper[5005]: I0225 13:15:29.448090 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:15:29 crc kubenswrapper[5005]: E0225 13:15:29.448347 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.327111 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Feb 25 13:15:35 crc kubenswrapper[5005]: E0225 13:15:35.328822 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5667c5d7-d8e5-482e-8463-6660b2289aa5" containerName="tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.328846 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5667c5d7-d8e5-482e-8463-6660b2289aa5" containerName="tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: E0225 13:15:35.328895 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" containerName="collect-profiles" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.328906 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" containerName="collect-profiles" Feb 25 13:15:35 crc 
kubenswrapper[5005]: I0225 13:15:35.329179 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" containerName="collect-profiles" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.329219 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="5667c5d7-d8e5-482e-8463-6660b2289aa5" containerName="tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.330295 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.347042 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.425725 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.425903 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j956\" (UniqueName: \"kubernetes.io/projected/ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1-kube-api-access-5j956\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.527539 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: 
\"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.527632 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j956\" (UniqueName: \"kubernetes.io/projected/ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1-kube-api-access-5j956\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.528823 5005 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.566599 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j956\" (UniqueName: \"kubernetes.io/projected/ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1-kube-api-access-5j956\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.578627 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:35 crc kubenswrapper[5005]: I0225 13:15:35.668069 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 25 13:15:36 crc kubenswrapper[5005]: I0225 13:15:36.182818 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Feb 25 13:15:36 crc kubenswrapper[5005]: I0225 13:15:36.521592 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1","Type":"ContainerStarted","Data":"59415ed74ebac0cd1627fc16f57220b74377bdc7d3721c71886409a7b42820fb"} Feb 25 13:15:37 crc kubenswrapper[5005]: I0225 13:15:37.533781 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1","Type":"ContainerStarted","Data":"3b5cb0092008d26e8094115f07e6dbb4510b6f255a66eab6437d967b86bc78a9"} Feb 25 13:15:42 crc kubenswrapper[5005]: I0225 13:15:42.685861 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:15:42 crc kubenswrapper[5005]: E0225 13:15:42.686724 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:15:56 crc kubenswrapper[5005]: I0225 13:15:56.701893 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:15:56 crc kubenswrapper[5005]: E0225 13:15:56.702762 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.135064 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=24.425393738 podStartE2EDuration="25.135036238s" podCreationTimestamp="2026-02-25 13:15:35 +0000 UTC" firstStartedPulling="2026-02-25 13:15:36.197324683 +0000 UTC m=+7050.238057020" lastFinishedPulling="2026-02-25 13:15:36.906967183 +0000 UTC m=+7050.947699520" observedRunningTime="2026-02-25 13:15:37.551573334 +0000 UTC m=+7051.592305661" watchObservedRunningTime="2026-02-25 13:16:00.135036238 +0000 UTC m=+7074.175768575" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.144736 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533756-cd5gh"] Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.146956 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.149224 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.149839 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.150139 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.156020 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533756-cd5gh"] Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.272547 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxhn\" (UniqueName: \"kubernetes.io/projected/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4-kube-api-access-7fxhn\") pod \"auto-csr-approver-29533756-cd5gh\" (UID: \"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4\") " pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.375339 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fxhn\" (UniqueName: \"kubernetes.io/projected/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4-kube-api-access-7fxhn\") pod \"auto-csr-approver-29533756-cd5gh\" (UID: \"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4\") " pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.396964 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fxhn\" (UniqueName: \"kubernetes.io/projected/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4-kube-api-access-7fxhn\") pod \"auto-csr-approver-29533756-cd5gh\" (UID: \"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4\") " 
pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.472602 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:00 crc kubenswrapper[5005]: I0225 13:16:00.947063 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533756-cd5gh"] Feb 25 13:16:01 crc kubenswrapper[5005]: I0225 13:16:01.754164 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" event={"ID":"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4","Type":"ContainerStarted","Data":"4889b5db66855c8e56b72837fb8b0a9b4da44b5ecc8f9376cbd4aafd2143c2e5"} Feb 25 13:16:03 crc kubenswrapper[5005]: I0225 13:16:03.793564 5005 generic.go:334] "Generic (PLEG): container finished" podID="5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4" containerID="ccaec3dc9e43abb822102da65116558c1ed088a2186d254e7c9b329f43dbfa31" exitCode=0 Feb 25 13:16:03 crc kubenswrapper[5005]: I0225 13:16:03.793673 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" event={"ID":"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4","Type":"ContainerDied","Data":"ccaec3dc9e43abb822102da65116558c1ed088a2186d254e7c9b329f43dbfa31"} Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.172490 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.284580 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fxhn\" (UniqueName: \"kubernetes.io/projected/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4-kube-api-access-7fxhn\") pod \"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4\" (UID: \"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4\") " Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.289541 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4-kube-api-access-7fxhn" (OuterVolumeSpecName: "kube-api-access-7fxhn") pod "5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4" (UID: "5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4"). InnerVolumeSpecName "kube-api-access-7fxhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.386864 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fxhn\" (UniqueName: \"kubernetes.io/projected/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4-kube-api-access-7fxhn\") on node \"crc\" DevicePath \"\"" Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.819944 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" event={"ID":"5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4","Type":"ContainerDied","Data":"4889b5db66855c8e56b72837fb8b0a9b4da44b5ecc8f9376cbd4aafd2143c2e5"} Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.819981 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4889b5db66855c8e56b72837fb8b0a9b4da44b5ecc8f9376cbd4aafd2143c2e5" Feb 25 13:16:05 crc kubenswrapper[5005]: I0225 13:16:05.820067 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533756-cd5gh" Feb 25 13:16:06 crc kubenswrapper[5005]: I0225 13:16:06.252284 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533750-4dsc4"] Feb 25 13:16:06 crc kubenswrapper[5005]: I0225 13:16:06.269917 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533750-4dsc4"] Feb 25 13:16:06 crc kubenswrapper[5005]: I0225 13:16:06.696755 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec" path="/var/lib/kubelet/pods/d6b1b463-ab19-4eae-bfac-ad8b1b4e93ec/volumes" Feb 25 13:16:10 crc kubenswrapper[5005]: I0225 13:16:10.686031 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:16:10 crc kubenswrapper[5005]: E0225 13:16:10.686771 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.957523 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-76s8f"] Feb 25 13:16:14 crc kubenswrapper[5005]: E0225 13:16:14.959171 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4" containerName="oc" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.959203 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4" containerName="oc" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.959652 5005 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4" containerName="oc" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.962148 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.972738 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76s8f"] Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.974501 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-catalog-content\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.974572 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-utilities\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:14 crc kubenswrapper[5005]: I0225 13:16:14.974788 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtkrz\" (UniqueName: \"kubernetes.io/projected/b7522104-8141-4f7c-a29f-3ee4c003e18a-kube-api-access-mtkrz\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.076798 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtkrz\" (UniqueName: \"kubernetes.io/projected/b7522104-8141-4f7c-a29f-3ee4c003e18a-kube-api-access-mtkrz\") pod \"redhat-operators-76s8f\" (UID: 
\"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.076964 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-catalog-content\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.076993 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-utilities\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.077557 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-catalog-content\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.077662 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-utilities\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.102475 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtkrz\" (UniqueName: \"kubernetes.io/projected/b7522104-8141-4f7c-a29f-3ee4c003e18a-kube-api-access-mtkrz\") pod \"redhat-operators-76s8f\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " 
pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.303400 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.796267 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76s8f"] Feb 25 13:16:15 crc kubenswrapper[5005]: I0225 13:16:15.915441 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerStarted","Data":"73144a025fa74e42f3acb73967d65776cae8803d60bd94c0ac28cf2b70ace02f"} Feb 25 13:16:16 crc kubenswrapper[5005]: I0225 13:16:16.929720 5005 generic.go:334] "Generic (PLEG): container finished" podID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerID="a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230" exitCode=0 Feb 25 13:16:16 crc kubenswrapper[5005]: I0225 13:16:16.930145 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerDied","Data":"a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230"} Feb 25 13:16:17 crc kubenswrapper[5005]: I0225 13:16:17.940903 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerStarted","Data":"2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa"} Feb 25 13:16:19 crc kubenswrapper[5005]: I0225 13:16:19.962451 5005 generic.go:334] "Generic (PLEG): container finished" podID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerID="2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa" exitCode=0 Feb 25 13:16:19 crc kubenswrapper[5005]: I0225 13:16:19.962531 5005 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerDied","Data":"2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa"} Feb 25 13:16:20 crc kubenswrapper[5005]: I0225 13:16:20.987150 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerStarted","Data":"9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3"} Feb 25 13:16:21 crc kubenswrapper[5005]: I0225 13:16:21.014997 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-76s8f" podStartSLOduration=3.567539936 podStartE2EDuration="7.014930968s" podCreationTimestamp="2026-02-25 13:16:14 +0000 UTC" firstStartedPulling="2026-02-25 13:16:16.93222914 +0000 UTC m=+7090.972961467" lastFinishedPulling="2026-02-25 13:16:20.379620172 +0000 UTC m=+7094.420352499" observedRunningTime="2026-02-25 13:16:21.009011007 +0000 UTC m=+7095.049743334" watchObservedRunningTime="2026-02-25 13:16:21.014930968 +0000 UTC m=+7095.055663305" Feb 25 13:16:21 crc kubenswrapper[5005]: I0225 13:16:21.685430 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:16:21 crc kubenswrapper[5005]: E0225 13:16:21.685912 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:16:24 crc kubenswrapper[5005]: I0225 13:16:24.646478 5005 scope.go:117] "RemoveContainer" 
containerID="18649525785f20bde8867939cfc6099096ca58afca9828df95963f8671668275" Feb 25 13:16:25 crc kubenswrapper[5005]: I0225 13:16:25.304681 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:25 crc kubenswrapper[5005]: I0225 13:16:25.305159 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:26 crc kubenswrapper[5005]: I0225 13:16:26.354236 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-76s8f" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="registry-server" probeResult="failure" output=< Feb 25 13:16:26 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 13:16:26 crc kubenswrapper[5005]: > Feb 25 13:16:33 crc kubenswrapper[5005]: I0225 13:16:33.686527 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:16:33 crc kubenswrapper[5005]: E0225 13:16:33.687413 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:16:35 crc kubenswrapper[5005]: I0225 13:16:35.349710 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:35 crc kubenswrapper[5005]: I0225 13:16:35.403516 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:35 crc kubenswrapper[5005]: I0225 13:16:35.587547 5005 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76s8f"] Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.153888 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-76s8f" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="registry-server" containerID="cri-o://9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3" gracePeriod=2 Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.630080 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.802474 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-catalog-content\") pod \"b7522104-8141-4f7c-a29f-3ee4c003e18a\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.802530 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-utilities\") pod \"b7522104-8141-4f7c-a29f-3ee4c003e18a\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.802578 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtkrz\" (UniqueName: \"kubernetes.io/projected/b7522104-8141-4f7c-a29f-3ee4c003e18a-kube-api-access-mtkrz\") pod \"b7522104-8141-4f7c-a29f-3ee4c003e18a\" (UID: \"b7522104-8141-4f7c-a29f-3ee4c003e18a\") " Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.803561 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-utilities" (OuterVolumeSpecName: "utilities") pod 
"b7522104-8141-4f7c-a29f-3ee4c003e18a" (UID: "b7522104-8141-4f7c-a29f-3ee4c003e18a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.821097 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7522104-8141-4f7c-a29f-3ee4c003e18a-kube-api-access-mtkrz" (OuterVolumeSpecName: "kube-api-access-mtkrz") pod "b7522104-8141-4f7c-a29f-3ee4c003e18a" (UID: "b7522104-8141-4f7c-a29f-3ee4c003e18a"). InnerVolumeSpecName "kube-api-access-mtkrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.904580 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.904624 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtkrz\" (UniqueName: \"kubernetes.io/projected/b7522104-8141-4f7c-a29f-3ee4c003e18a-kube-api-access-mtkrz\") on node \"crc\" DevicePath \"\"" Feb 25 13:16:37 crc kubenswrapper[5005]: I0225 13:16:37.926732 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7522104-8141-4f7c-a29f-3ee4c003e18a" (UID: "b7522104-8141-4f7c-a29f-3ee4c003e18a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.006338 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7522104-8141-4f7c-a29f-3ee4c003e18a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.165646 5005 generic.go:334] "Generic (PLEG): container finished" podID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerID="9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3" exitCode=0 Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.165692 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerDied","Data":"9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3"} Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.165700 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-76s8f" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.165725 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s8f" event={"ID":"b7522104-8141-4f7c-a29f-3ee4c003e18a","Type":"ContainerDied","Data":"73144a025fa74e42f3acb73967d65776cae8803d60bd94c0ac28cf2b70ace02f"} Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.165743 5005 scope.go:117] "RemoveContainer" containerID="9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.195167 5005 scope.go:117] "RemoveContainer" containerID="2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.210910 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76s8f"] Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.222174 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-76s8f"] Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.234252 5005 scope.go:117] "RemoveContainer" containerID="a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.294968 5005 scope.go:117] "RemoveContainer" containerID="9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3" Feb 25 13:16:38 crc kubenswrapper[5005]: E0225 13:16:38.295600 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3\": container with ID starting with 9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3 not found: ID does not exist" containerID="9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.295663 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3"} err="failed to get container status \"9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3\": rpc error: code = NotFound desc = could not find container \"9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3\": container with ID starting with 9caa639e1078fdaa04be3a4721374f6580b06bdc0f489bcf565ebd847ed12eb3 not found: ID does not exist" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.295878 5005 scope.go:117] "RemoveContainer" containerID="2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa" Feb 25 13:16:38 crc kubenswrapper[5005]: E0225 13:16:38.296598 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa\": container with ID starting with 2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa not found: ID does not exist" containerID="2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.296633 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa"} err="failed to get container status \"2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa\": rpc error: code = NotFound desc = could not find container \"2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa\": container with ID starting with 2ea93089d6c2480c13fd8e27476c9618240f2739dd50ba0af133c989d9ba45aa not found: ID does not exist" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.296653 5005 scope.go:117] "RemoveContainer" containerID="a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230" Feb 25 13:16:38 crc kubenswrapper[5005]: E0225 
13:16:38.296998 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230\": container with ID starting with a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230 not found: ID does not exist" containerID="a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.297059 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230"} err="failed to get container status \"a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230\": rpc error: code = NotFound desc = could not find container \"a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230\": container with ID starting with a4b28f679c46dda23df09c0e8aea9d49d0361e12de2efec662e5a585431c4230 not found: ID does not exist" Feb 25 13:16:38 crc kubenswrapper[5005]: I0225 13:16:38.695267 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" path="/var/lib/kubelet/pods/b7522104-8141-4f7c-a29f-3ee4c003e18a/volumes" Feb 25 13:16:46 crc kubenswrapper[5005]: I0225 13:16:46.691527 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:16:46 crc kubenswrapper[5005]: E0225 13:16:46.692407 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:16:57 crc kubenswrapper[5005]: I0225 13:16:57.686122 
5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:16:57 crc kubenswrapper[5005]: E0225 13:16:57.686891 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:17:12 crc kubenswrapper[5005]: I0225 13:17:12.685841 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:17:12 crc kubenswrapper[5005]: E0225 13:17:12.687847 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:17:26 crc kubenswrapper[5005]: I0225 13:17:26.698718 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:17:26 crc kubenswrapper[5005]: E0225 13:17:26.699604 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:17:39 crc kubenswrapper[5005]: I0225 
13:17:39.685568 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:17:39 crc kubenswrapper[5005]: E0225 13:17:39.686518 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:17:53 crc kubenswrapper[5005]: I0225 13:17:53.685931 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:17:53 crc kubenswrapper[5005]: E0225 13:17:53.686641 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.147833 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533758-wgbwm"] Feb 25 13:18:00 crc kubenswrapper[5005]: E0225 13:18:00.148942 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="registry-server" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.148958 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="registry-server" Feb 25 13:18:00 crc kubenswrapper[5005]: E0225 13:18:00.148971 5005 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="extract-utilities" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.148977 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="extract-utilities" Feb 25 13:18:00 crc kubenswrapper[5005]: E0225 13:18:00.148994 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="extract-content" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.148999 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="extract-content" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.149168 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7522104-8141-4f7c-a29f-3ee4c003e18a" containerName="registry-server" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.149929 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.152886 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.152985 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.153073 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.168433 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533758-wgbwm"] Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.204995 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5s7b\" (UniqueName: 
\"kubernetes.io/projected/edaae432-b990-4ba1-96c6-dd1d2b66322f-kube-api-access-x5s7b\") pod \"auto-csr-approver-29533758-wgbwm\" (UID: \"edaae432-b990-4ba1-96c6-dd1d2b66322f\") " pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.306339 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5s7b\" (UniqueName: \"kubernetes.io/projected/edaae432-b990-4ba1-96c6-dd1d2b66322f-kube-api-access-x5s7b\") pod \"auto-csr-approver-29533758-wgbwm\" (UID: \"edaae432-b990-4ba1-96c6-dd1d2b66322f\") " pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.328829 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5s7b\" (UniqueName: \"kubernetes.io/projected/edaae432-b990-4ba1-96c6-dd1d2b66322f-kube-api-access-x5s7b\") pod \"auto-csr-approver-29533758-wgbwm\" (UID: \"edaae432-b990-4ba1-96c6-dd1d2b66322f\") " pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:00 crc kubenswrapper[5005]: I0225 13:18:00.469353 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:01 crc kubenswrapper[5005]: I0225 13:18:01.000202 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533758-wgbwm"] Feb 25 13:18:01 crc kubenswrapper[5005]: W0225 13:18:01.002283 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedaae432_b990_4ba1_96c6_dd1d2b66322f.slice/crio-09f2a2dc0206cc6d09a81798b4530127b10da3dcdca95a4234b6abaafd0b1f17 WatchSource:0}: Error finding container 09f2a2dc0206cc6d09a81798b4530127b10da3dcdca95a4234b6abaafd0b1f17: Status 404 returned error can't find the container with id 09f2a2dc0206cc6d09a81798b4530127b10da3dcdca95a4234b6abaafd0b1f17 Feb 25 13:18:01 crc kubenswrapper[5005]: I0225 13:18:01.006079 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:18:01 crc kubenswrapper[5005]: I0225 13:18:01.929248 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533758-wgbwm" event={"ID":"edaae432-b990-4ba1-96c6-dd1d2b66322f","Type":"ContainerStarted","Data":"09f2a2dc0206cc6d09a81798b4530127b10da3dcdca95a4234b6abaafd0b1f17"} Feb 25 13:18:03 crc kubenswrapper[5005]: I0225 13:18:03.959231 5005 generic.go:334] "Generic (PLEG): container finished" podID="edaae432-b990-4ba1-96c6-dd1d2b66322f" containerID="7863eebea3504a9eb522a0a75b4e33564e9020a7aaf0134f2335c9cb035817ae" exitCode=0 Feb 25 13:18:03 crc kubenswrapper[5005]: I0225 13:18:03.959341 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533758-wgbwm" event={"ID":"edaae432-b990-4ba1-96c6-dd1d2b66322f","Type":"ContainerDied","Data":"7863eebea3504a9eb522a0a75b4e33564e9020a7aaf0134f2335c9cb035817ae"} Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.299897 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.444541 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5s7b\" (UniqueName: \"kubernetes.io/projected/edaae432-b990-4ba1-96c6-dd1d2b66322f-kube-api-access-x5s7b\") pod \"edaae432-b990-4ba1-96c6-dd1d2b66322f\" (UID: \"edaae432-b990-4ba1-96c6-dd1d2b66322f\") " Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.455539 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edaae432-b990-4ba1-96c6-dd1d2b66322f-kube-api-access-x5s7b" (OuterVolumeSpecName: "kube-api-access-x5s7b") pod "edaae432-b990-4ba1-96c6-dd1d2b66322f" (UID: "edaae432-b990-4ba1-96c6-dd1d2b66322f"). InnerVolumeSpecName "kube-api-access-x5s7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.546676 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5s7b\" (UniqueName: \"kubernetes.io/projected/edaae432-b990-4ba1-96c6-dd1d2b66322f-kube-api-access-x5s7b\") on node \"crc\" DevicePath \"\"" Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.686213 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:18:05 crc kubenswrapper[5005]: E0225 13:18:05.686543 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.978404 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29533758-wgbwm" event={"ID":"edaae432-b990-4ba1-96c6-dd1d2b66322f","Type":"ContainerDied","Data":"09f2a2dc0206cc6d09a81798b4530127b10da3dcdca95a4234b6abaafd0b1f17"} Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.978442 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f2a2dc0206cc6d09a81798b4530127b10da3dcdca95a4234b6abaafd0b1f17" Feb 25 13:18:05 crc kubenswrapper[5005]: I0225 13:18:05.978464 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533758-wgbwm" Feb 25 13:18:06 crc kubenswrapper[5005]: I0225 13:18:06.374824 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533752-7ddnq"] Feb 25 13:18:06 crc kubenswrapper[5005]: I0225 13:18:06.383709 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533752-7ddnq"] Feb 25 13:18:06 crc kubenswrapper[5005]: I0225 13:18:06.743123 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8934a323-1bc4-4d56-a124-dd020c24d20b" path="/var/lib/kubelet/pods/8934a323-1bc4-4d56-a124-dd020c24d20b/volumes" Feb 25 13:18:20 crc kubenswrapper[5005]: I0225 13:18:20.685280 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:18:20 crc kubenswrapper[5005]: E0225 13:18:20.686204 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:18:24 crc kubenswrapper[5005]: I0225 13:18:24.811536 5005 scope.go:117] "RemoveContainer" 
containerID="37b38919e556334c35a822f4af8a89a3923d17c7c5e832453e06f3674955684b" Feb 25 13:18:31 crc kubenswrapper[5005]: I0225 13:18:31.685836 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:18:31 crc kubenswrapper[5005]: E0225 13:18:31.686815 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:18:43 crc kubenswrapper[5005]: I0225 13:18:43.686803 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:18:43 crc kubenswrapper[5005]: E0225 13:18:43.687805 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:18:57 crc kubenswrapper[5005]: I0225 13:18:57.685831 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:18:57 crc kubenswrapper[5005]: E0225 13:18:57.686516 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:19:11 crc kubenswrapper[5005]: I0225 13:19:11.685562 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:19:11 crc kubenswrapper[5005]: E0225 13:19:11.686470 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:19:25 crc kubenswrapper[5005]: I0225 13:19:25.686469 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:19:25 crc kubenswrapper[5005]: E0225 13:19:25.688872 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:19:36 crc kubenswrapper[5005]: I0225 13:19:36.692481 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:19:36 crc kubenswrapper[5005]: E0225 13:19:36.693434 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.685537 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:19:50 crc kubenswrapper[5005]: E0225 13:19:50.686871 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.907208 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtj9p"] Feb 25 13:19:50 crc kubenswrapper[5005]: E0225 13:19:50.907730 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edaae432-b990-4ba1-96c6-dd1d2b66322f" containerName="oc" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.907750 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="edaae432-b990-4ba1-96c6-dd1d2b66322f" containerName="oc" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.907920 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="edaae432-b990-4ba1-96c6-dd1d2b66322f" containerName="oc" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.918419 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.947880 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtj9p"] Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.989557 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9w92\" (UniqueName: \"kubernetes.io/projected/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-kube-api-access-b9w92\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.989616 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-utilities\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:50 crc kubenswrapper[5005]: I0225 13:19:50.990041 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-catalog-content\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.091550 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-catalog-content\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.092210 5005 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-catalog-content\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.093557 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9w92\" (UniqueName: \"kubernetes.io/projected/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-kube-api-access-b9w92\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.094088 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-utilities\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.094514 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-utilities\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.113400 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9w92\" (UniqueName: \"kubernetes.io/projected/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-kube-api-access-b9w92\") pod \"community-operators-gtj9p\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.249213 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:19:51 crc kubenswrapper[5005]: I0225 13:19:51.813853 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtj9p"] Feb 25 13:19:52 crc kubenswrapper[5005]: I0225 13:19:52.728620 5005 generic.go:334] "Generic (PLEG): container finished" podID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerID="20447bddce309cb3e255ab09c49f62ef397bcb2f2e8dfba08ee89434e7158f5d" exitCode=0 Feb 25 13:19:52 crc kubenswrapper[5005]: I0225 13:19:52.728923 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerDied","Data":"20447bddce309cb3e255ab09c49f62ef397bcb2f2e8dfba08ee89434e7158f5d"} Feb 25 13:19:52 crc kubenswrapper[5005]: I0225 13:19:52.728953 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerStarted","Data":"6be9ae13a3e6b40a90616f3f525ac2925d0ecb308c24032c1ccae68ebf9ad807"} Feb 25 13:19:53 crc kubenswrapper[5005]: I0225 13:19:53.740053 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerStarted","Data":"b05a21388110f9730753d99764f470759c22bc11b19d8b534021bf10dc9059a9"} Feb 25 13:19:54 crc kubenswrapper[5005]: I0225 13:19:54.749944 5005 generic.go:334] "Generic (PLEG): container finished" podID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerID="b05a21388110f9730753d99764f470759c22bc11b19d8b534021bf10dc9059a9" exitCode=0 Feb 25 13:19:54 crc kubenswrapper[5005]: I0225 13:19:54.750033 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" 
event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerDied","Data":"b05a21388110f9730753d99764f470759c22bc11b19d8b534021bf10dc9059a9"} Feb 25 13:19:55 crc kubenswrapper[5005]: I0225 13:19:55.759273 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerStarted","Data":"af991874b7ec9354687dc25015af715e735e4f8085909fc29e23e206333ef3a1"} Feb 25 13:19:55 crc kubenswrapper[5005]: I0225 13:19:55.780732 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtj9p" podStartSLOduration=3.179700006 podStartE2EDuration="5.780710408s" podCreationTimestamp="2026-02-25 13:19:50 +0000 UTC" firstStartedPulling="2026-02-25 13:19:52.730579029 +0000 UTC m=+7306.771311366" lastFinishedPulling="2026-02-25 13:19:55.331589441 +0000 UTC m=+7309.372321768" observedRunningTime="2026-02-25 13:19:55.777943803 +0000 UTC m=+7309.818676130" watchObservedRunningTime="2026-02-25 13:19:55.780710408 +0000 UTC m=+7309.821442735" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.139188 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533760-24vnn"] Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.141133 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.143002 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.143208 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.147213 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.148081 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533760-24vnn"] Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.242557 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4p5b\" (UniqueName: \"kubernetes.io/projected/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741-kube-api-access-b4p5b\") pod \"auto-csr-approver-29533760-24vnn\" (UID: \"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741\") " pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.346118 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4p5b\" (UniqueName: \"kubernetes.io/projected/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741-kube-api-access-b4p5b\") pod \"auto-csr-approver-29533760-24vnn\" (UID: \"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741\") " pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.370325 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4p5b\" (UniqueName: \"kubernetes.io/projected/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741-kube-api-access-b4p5b\") pod \"auto-csr-approver-29533760-24vnn\" (UID: \"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741\") " 
pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.459847 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:00 crc kubenswrapper[5005]: I0225 13:20:00.987131 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533760-24vnn"] Feb 25 13:20:01 crc kubenswrapper[5005]: I0225 13:20:01.250011 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:20:01 crc kubenswrapper[5005]: I0225 13:20:01.250224 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:20:01 crc kubenswrapper[5005]: I0225 13:20:01.298359 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:20:01 crc kubenswrapper[5005]: I0225 13:20:01.995892 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533760-24vnn" event={"ID":"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741","Type":"ContainerStarted","Data":"632fb816d4d360f18dc567eeac8c147aa97ace60ef3086fc81b8e6d26065e327"} Feb 25 13:20:02 crc kubenswrapper[5005]: I0225 13:20:02.056642 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:20:02 crc kubenswrapper[5005]: I0225 13:20:02.106094 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtj9p"] Feb 25 13:20:02 crc kubenswrapper[5005]: I0225 13:20:02.685616 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:20:02 crc kubenswrapper[5005]: E0225 13:20:02.687095 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:20:03 crc kubenswrapper[5005]: I0225 13:20:03.006531 5005 generic.go:334] "Generic (PLEG): container finished" podID="dce9c5a6-d647-4c6e-8c33-dc2f77cc5741" containerID="f6fb75b056c6348410bf104269a6df4c9120e5a690729dd3aa8e3289f451a1ed" exitCode=0 Feb 25 13:20:03 crc kubenswrapper[5005]: I0225 13:20:03.006982 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533760-24vnn" event={"ID":"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741","Type":"ContainerDied","Data":"f6fb75b056c6348410bf104269a6df4c9120e5a690729dd3aa8e3289f451a1ed"} Feb 25 13:20:04 crc kubenswrapper[5005]: I0225 13:20:04.018161 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtj9p" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="registry-server" containerID="cri-o://af991874b7ec9354687dc25015af715e735e4f8085909fc29e23e206333ef3a1" gracePeriod=2 Feb 25 13:20:04 crc kubenswrapper[5005]: I0225 13:20:04.388433 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:04 crc kubenswrapper[5005]: I0225 13:20:04.537494 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4p5b\" (UniqueName: \"kubernetes.io/projected/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741-kube-api-access-b4p5b\") pod \"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741\" (UID: \"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741\") " Feb 25 13:20:04 crc kubenswrapper[5005]: I0225 13:20:04.544979 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741-kube-api-access-b4p5b" (OuterVolumeSpecName: "kube-api-access-b4p5b") pod "dce9c5a6-d647-4c6e-8c33-dc2f77cc5741" (UID: "dce9c5a6-d647-4c6e-8c33-dc2f77cc5741"). InnerVolumeSpecName "kube-api-access-b4p5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:20:04 crc kubenswrapper[5005]: I0225 13:20:04.640338 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4p5b\" (UniqueName: \"kubernetes.io/projected/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741-kube-api-access-b4p5b\") on node \"crc\" DevicePath \"\"" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.027646 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533760-24vnn" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.027635 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533760-24vnn" event={"ID":"dce9c5a6-d647-4c6e-8c33-dc2f77cc5741","Type":"ContainerDied","Data":"632fb816d4d360f18dc567eeac8c147aa97ace60ef3086fc81b8e6d26065e327"} Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.027785 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="632fb816d4d360f18dc567eeac8c147aa97ace60ef3086fc81b8e6d26065e327" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.031535 5005 generic.go:334] "Generic (PLEG): container finished" podID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerID="af991874b7ec9354687dc25015af715e735e4f8085909fc29e23e206333ef3a1" exitCode=0 Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.031890 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerDied","Data":"af991874b7ec9354687dc25015af715e735e4f8085909fc29e23e206333ef3a1"} Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.305685 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.454413 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9w92\" (UniqueName: \"kubernetes.io/projected/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-kube-api-access-b9w92\") pod \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.454793 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-catalog-content\") pod \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.454988 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-utilities\") pod \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\" (UID: \"c7b94af6-e6f5-41c9-b6ca-99f242e99b53\") " Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.455875 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-utilities" (OuterVolumeSpecName: "utilities") pod "c7b94af6-e6f5-41c9-b6ca-99f242e99b53" (UID: "c7b94af6-e6f5-41c9-b6ca-99f242e99b53"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.456362 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.459136 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-kube-api-access-b9w92" (OuterVolumeSpecName: "kube-api-access-b9w92") pod "c7b94af6-e6f5-41c9-b6ca-99f242e99b53" (UID: "c7b94af6-e6f5-41c9-b6ca-99f242e99b53"). InnerVolumeSpecName "kube-api-access-b9w92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.464827 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533754-fz9h2"] Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.473709 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533754-fz9h2"] Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.521405 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7b94af6-e6f5-41c9-b6ca-99f242e99b53" (UID: "c7b94af6-e6f5-41c9-b6ca-99f242e99b53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.557791 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9w92\" (UniqueName: \"kubernetes.io/projected/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-kube-api-access-b9w92\") on node \"crc\" DevicePath \"\"" Feb 25 13:20:05 crc kubenswrapper[5005]: I0225 13:20:05.557946 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b94af6-e6f5-41c9-b6ca-99f242e99b53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.051809 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtj9p" event={"ID":"c7b94af6-e6f5-41c9-b6ca-99f242e99b53","Type":"ContainerDied","Data":"6be9ae13a3e6b40a90616f3f525ac2925d0ecb308c24032c1ccae68ebf9ad807"} Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.051859 5005 scope.go:117] "RemoveContainer" containerID="af991874b7ec9354687dc25015af715e735e4f8085909fc29e23e206333ef3a1" Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.051902 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtj9p" Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.084018 5005 scope.go:117] "RemoveContainer" containerID="b05a21388110f9730753d99764f470759c22bc11b19d8b534021bf10dc9059a9" Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.092662 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtj9p"] Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.107075 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtj9p"] Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.119225 5005 scope.go:117] "RemoveContainer" containerID="20447bddce309cb3e255ab09c49f62ef397bcb2f2e8dfba08ee89434e7158f5d" Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.696582 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" path="/var/lib/kubelet/pods/c7b94af6-e6f5-41c9-b6ca-99f242e99b53/volumes" Feb 25 13:20:06 crc kubenswrapper[5005]: I0225 13:20:06.698203 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d910565a-3322-4078-9a06-204b33bcece8" path="/var/lib/kubelet/pods/d910565a-3322-4078-9a06-204b33bcece8/volumes" Feb 25 13:20:16 crc kubenswrapper[5005]: I0225 13:20:16.115066 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:20:16 crc kubenswrapper[5005]: E0225 13:20:16.119628 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:20:24 crc kubenswrapper[5005]: 
I0225 13:20:24.907283 5005 scope.go:117] "RemoveContainer" containerID="58a278f120a8a16cd383554711507040b797c1ff01e6e9a240f4fdc73cc04fdc" Feb 25 13:20:30 crc kubenswrapper[5005]: I0225 13:20:30.685979 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:20:31 crc kubenswrapper[5005]: I0225 13:20:31.283671 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"ffab8b4137ccfcf64a66c86d8b1801844c0afc8dfde2226efbec91c5c8562db1"} Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.151818 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533762-kkcbc"] Feb 25 13:22:00 crc kubenswrapper[5005]: E0225 13:22:00.153974 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="extract-content" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.154055 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="extract-content" Feb 25 13:22:00 crc kubenswrapper[5005]: E0225 13:22:00.154109 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="registry-server" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.154159 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="registry-server" Feb 25 13:22:00 crc kubenswrapper[5005]: E0225 13:22:00.154217 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="extract-utilities" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.154279 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" 
containerName="extract-utilities" Feb 25 13:22:00 crc kubenswrapper[5005]: E0225 13:22:00.154329 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce9c5a6-d647-4c6e-8c33-dc2f77cc5741" containerName="oc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.154393 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce9c5a6-d647-4c6e-8c33-dc2f77cc5741" containerName="oc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.154646 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b94af6-e6f5-41c9-b6ca-99f242e99b53" containerName="registry-server" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.154714 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce9c5a6-d647-4c6e-8c33-dc2f77cc5741" containerName="oc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.155565 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.158405 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.158524 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.158552 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.162331 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533762-kkcbc"] Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.327328 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqc6\" (UniqueName: \"kubernetes.io/projected/cda21550-312d-489d-81ae-944b6cd64897-kube-api-access-ftqc6\") pod 
\"auto-csr-approver-29533762-kkcbc\" (UID: \"cda21550-312d-489d-81ae-944b6cd64897\") " pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.430674 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftqc6\" (UniqueName: \"kubernetes.io/projected/cda21550-312d-489d-81ae-944b6cd64897-kube-api-access-ftqc6\") pod \"auto-csr-approver-29533762-kkcbc\" (UID: \"cda21550-312d-489d-81ae-944b6cd64897\") " pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.455613 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftqc6\" (UniqueName: \"kubernetes.io/projected/cda21550-312d-489d-81ae-944b6cd64897-kube-api-access-ftqc6\") pod \"auto-csr-approver-29533762-kkcbc\" (UID: \"cda21550-312d-489d-81ae-944b6cd64897\") " pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.476760 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:00 crc kubenswrapper[5005]: I0225 13:22:00.972128 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533762-kkcbc"] Feb 25 13:22:01 crc kubenswrapper[5005]: I0225 13:22:01.301042 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" event={"ID":"cda21550-312d-489d-81ae-944b6cd64897","Type":"ContainerStarted","Data":"8c1f1d4cecd2665e3ec72703100154b1d09a0587487e1493a4e91f0ed7105cd0"} Feb 25 13:22:03 crc kubenswrapper[5005]: I0225 13:22:03.318764 5005 generic.go:334] "Generic (PLEG): container finished" podID="cda21550-312d-489d-81ae-944b6cd64897" containerID="eaee758a0e30923c6c246e4d2d9d8ede930988d4495fec92301474c4c5c9284b" exitCode=0 Feb 25 13:22:03 crc kubenswrapper[5005]: I0225 13:22:03.318917 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" event={"ID":"cda21550-312d-489d-81ae-944b6cd64897","Type":"ContainerDied","Data":"eaee758a0e30923c6c246e4d2d9d8ede930988d4495fec92301474c4c5c9284b"} Feb 25 13:22:04 crc kubenswrapper[5005]: I0225 13:22:04.678883 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:04 crc kubenswrapper[5005]: I0225 13:22:04.814938 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftqc6\" (UniqueName: \"kubernetes.io/projected/cda21550-312d-489d-81ae-944b6cd64897-kube-api-access-ftqc6\") pod \"cda21550-312d-489d-81ae-944b6cd64897\" (UID: \"cda21550-312d-489d-81ae-944b6cd64897\") " Feb 25 13:22:04 crc kubenswrapper[5005]: I0225 13:22:04.820325 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda21550-312d-489d-81ae-944b6cd64897-kube-api-access-ftqc6" (OuterVolumeSpecName: "kube-api-access-ftqc6") pod "cda21550-312d-489d-81ae-944b6cd64897" (UID: "cda21550-312d-489d-81ae-944b6cd64897"). InnerVolumeSpecName "kube-api-access-ftqc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:22:04 crc kubenswrapper[5005]: I0225 13:22:04.918502 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftqc6\" (UniqueName: \"kubernetes.io/projected/cda21550-312d-489d-81ae-944b6cd64897-kube-api-access-ftqc6\") on node \"crc\" DevicePath \"\"" Feb 25 13:22:05 crc kubenswrapper[5005]: I0225 13:22:05.353216 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" event={"ID":"cda21550-312d-489d-81ae-944b6cd64897","Type":"ContainerDied","Data":"8c1f1d4cecd2665e3ec72703100154b1d09a0587487e1493a4e91f0ed7105cd0"} Feb 25 13:22:05 crc kubenswrapper[5005]: I0225 13:22:05.353674 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1f1d4cecd2665e3ec72703100154b1d09a0587487e1493a4e91f0ed7105cd0" Feb 25 13:22:05 crc kubenswrapper[5005]: I0225 13:22:05.353334 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533762-kkcbc" Feb 25 13:22:05 crc kubenswrapper[5005]: I0225 13:22:05.741786 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533756-cd5gh"] Feb 25 13:22:05 crc kubenswrapper[5005]: I0225 13:22:05.749502 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533756-cd5gh"] Feb 25 13:22:06 crc kubenswrapper[5005]: I0225 13:22:06.696627 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4" path="/var/lib/kubelet/pods/5c5d9cbd-5bdd-46ba-8ff0-8b79f91d41f4/volumes" Feb 25 13:22:25 crc kubenswrapper[5005]: I0225 13:22:25.025533 5005 scope.go:117] "RemoveContainer" containerID="ccaec3dc9e43abb822102da65116558c1ed088a2186d254e7c9b329f43dbfa31" Feb 25 13:22:58 crc kubenswrapper[5005]: I0225 13:22:58.087942 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:22:58 crc kubenswrapper[5005]: I0225 13:22:58.088538 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:23:28 crc kubenswrapper[5005]: I0225 13:23:28.087180 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:23:28 crc kubenswrapper[5005]: 
I0225 13:23:28.088061 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.087687 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.088306 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.088358 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.089227 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffab8b4137ccfcf64a66c86d8b1801844c0afc8dfde2226efbec91c5c8562db1"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.089282 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" containerID="cri-o://ffab8b4137ccfcf64a66c86d8b1801844c0afc8dfde2226efbec91c5c8562db1" gracePeriod=600 Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.281665 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="ffab8b4137ccfcf64a66c86d8b1801844c0afc8dfde2226efbec91c5c8562db1" exitCode=0 Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.281709 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"ffab8b4137ccfcf64a66c86d8b1801844c0afc8dfde2226efbec91c5c8562db1"} Feb 25 13:23:58 crc kubenswrapper[5005]: I0225 13:23:58.281741 5005 scope.go:117] "RemoveContainer" containerID="e69861600cc9b89c742822fae1093e7ec75d5ae94ae770e895069e574a9b3089" Feb 25 13:23:59 crc kubenswrapper[5005]: I0225 13:23:59.291411 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"} Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.140870 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533764-nfdfr"] Feb 25 13:24:00 crc kubenswrapper[5005]: E0225 13:24:00.141973 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda21550-312d-489d-81ae-944b6cd64897" containerName="oc" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.142005 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda21550-312d-489d-81ae-944b6cd64897" containerName="oc" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.142481 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda21550-312d-489d-81ae-944b6cd64897" containerName="oc" Feb 25 13:24:00 
crc kubenswrapper[5005]: I0225 13:24:00.143549 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.145968 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.148778 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.148904 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.152414 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533764-nfdfr"] Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.270575 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb5m\" (UniqueName: \"kubernetes.io/projected/b56ff460-a07c-4b55-8421-2f828e008427-kube-api-access-crb5m\") pod \"auto-csr-approver-29533764-nfdfr\" (UID: \"b56ff460-a07c-4b55-8421-2f828e008427\") " pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.373672 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb5m\" (UniqueName: \"kubernetes.io/projected/b56ff460-a07c-4b55-8421-2f828e008427-kube-api-access-crb5m\") pod \"auto-csr-approver-29533764-nfdfr\" (UID: \"b56ff460-a07c-4b55-8421-2f828e008427\") " pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.392917 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb5m\" (UniqueName: \"kubernetes.io/projected/b56ff460-a07c-4b55-8421-2f828e008427-kube-api-access-crb5m\") 
pod \"auto-csr-approver-29533764-nfdfr\" (UID: \"b56ff460-a07c-4b55-8421-2f828e008427\") " pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.467410 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.937992 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533764-nfdfr"] Feb 25 13:24:00 crc kubenswrapper[5005]: W0225 13:24:00.955938 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb56ff460_a07c_4b55_8421_2f828e008427.slice/crio-9f1f095289d33a218afa3b868405839955934a57b0a4f42db46d9815fa803d04 WatchSource:0}: Error finding container 9f1f095289d33a218afa3b868405839955934a57b0a4f42db46d9815fa803d04: Status 404 returned error can't find the container with id 9f1f095289d33a218afa3b868405839955934a57b0a4f42db46d9815fa803d04 Feb 25 13:24:00 crc kubenswrapper[5005]: I0225 13:24:00.959652 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:24:01 crc kubenswrapper[5005]: I0225 13:24:01.309545 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" event={"ID":"b56ff460-a07c-4b55-8421-2f828e008427","Type":"ContainerStarted","Data":"9f1f095289d33a218afa3b868405839955934a57b0a4f42db46d9815fa803d04"} Feb 25 13:24:02 crc kubenswrapper[5005]: I0225 13:24:02.320194 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" event={"ID":"b56ff460-a07c-4b55-8421-2f828e008427","Type":"ContainerStarted","Data":"ca3ebebc3eae409be28cb25ad9c311d14f2f6e89565e806c26a32074494607d3"} Feb 25 13:24:02 crc kubenswrapper[5005]: I0225 13:24:02.337448 5005 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" podStartSLOduration=1.2566664109999999 podStartE2EDuration="2.337429461s" podCreationTimestamp="2026-02-25 13:24:00 +0000 UTC" firstStartedPulling="2026-02-25 13:24:00.959186736 +0000 UTC m=+7554.999919063" lastFinishedPulling="2026-02-25 13:24:02.039949786 +0000 UTC m=+7556.080682113" observedRunningTime="2026-02-25 13:24:02.336624716 +0000 UTC m=+7556.377357053" watchObservedRunningTime="2026-02-25 13:24:02.337429461 +0000 UTC m=+7556.378161798" Feb 25 13:24:03 crc kubenswrapper[5005]: I0225 13:24:03.331125 5005 generic.go:334] "Generic (PLEG): container finished" podID="b56ff460-a07c-4b55-8421-2f828e008427" containerID="ca3ebebc3eae409be28cb25ad9c311d14f2f6e89565e806c26a32074494607d3" exitCode=0 Feb 25 13:24:03 crc kubenswrapper[5005]: I0225 13:24:03.331336 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" event={"ID":"b56ff460-a07c-4b55-8421-2f828e008427","Type":"ContainerDied","Data":"ca3ebebc3eae409be28cb25ad9c311d14f2f6e89565e806c26a32074494607d3"} Feb 25 13:24:04 crc kubenswrapper[5005]: I0225 13:24:04.744085 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:04 crc kubenswrapper[5005]: I0225 13:24:04.872704 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crb5m\" (UniqueName: \"kubernetes.io/projected/b56ff460-a07c-4b55-8421-2f828e008427-kube-api-access-crb5m\") pod \"b56ff460-a07c-4b55-8421-2f828e008427\" (UID: \"b56ff460-a07c-4b55-8421-2f828e008427\") " Feb 25 13:24:04 crc kubenswrapper[5005]: I0225 13:24:04.878474 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b56ff460-a07c-4b55-8421-2f828e008427-kube-api-access-crb5m" (OuterVolumeSpecName: "kube-api-access-crb5m") pod "b56ff460-a07c-4b55-8421-2f828e008427" (UID: "b56ff460-a07c-4b55-8421-2f828e008427"). InnerVolumeSpecName "kube-api-access-crb5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:24:04 crc kubenswrapper[5005]: I0225 13:24:04.974789 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crb5m\" (UniqueName: \"kubernetes.io/projected/b56ff460-a07c-4b55-8421-2f828e008427-kube-api-access-crb5m\") on node \"crc\" DevicePath \"\"" Feb 25 13:24:05 crc kubenswrapper[5005]: I0225 13:24:05.350368 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" event={"ID":"b56ff460-a07c-4b55-8421-2f828e008427","Type":"ContainerDied","Data":"9f1f095289d33a218afa3b868405839955934a57b0a4f42db46d9815fa803d04"} Feb 25 13:24:05 crc kubenswrapper[5005]: I0225 13:24:05.350429 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1f095289d33a218afa3b868405839955934a57b0a4f42db46d9815fa803d04" Feb 25 13:24:05 crc kubenswrapper[5005]: I0225 13:24:05.350506 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533764-nfdfr" Feb 25 13:24:05 crc kubenswrapper[5005]: I0225 13:24:05.404280 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533758-wgbwm"] Feb 25 13:24:05 crc kubenswrapper[5005]: I0225 13:24:05.413796 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533758-wgbwm"] Feb 25 13:24:06 crc kubenswrapper[5005]: I0225 13:24:06.695448 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edaae432-b990-4ba1-96c6-dd1d2b66322f" path="/var/lib/kubelet/pods/edaae432-b990-4ba1-96c6-dd1d2b66322f/volumes" Feb 25 13:24:25 crc kubenswrapper[5005]: I0225 13:24:25.109730 5005 scope.go:117] "RemoveContainer" containerID="7863eebea3504a9eb522a0a75b4e33564e9020a7aaf0134f2335c9cb035817ae" Feb 25 13:25:58 crc kubenswrapper[5005]: I0225 13:25:58.087296 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:25:58 crc kubenswrapper[5005]: I0225 13:25:58.088029 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.138477 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533766-85snf"] Feb 25 13:26:00 crc kubenswrapper[5005]: E0225 13:26:00.139182 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b56ff460-a07c-4b55-8421-2f828e008427" containerName="oc" Feb 25 13:26:00 crc 
kubenswrapper[5005]: I0225 13:26:00.139195 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56ff460-a07c-4b55-8421-2f828e008427" containerName="oc" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.139388 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="b56ff460-a07c-4b55-8421-2f828e008427" containerName="oc" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.139968 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.145858 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.145940 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.146933 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.149987 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533766-85snf"] Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.246175 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8xg\" (UniqueName: \"kubernetes.io/projected/fb708f49-97df-4dd7-a088-1d4240ba3831-kube-api-access-2s8xg\") pod \"auto-csr-approver-29533766-85snf\" (UID: \"fb708f49-97df-4dd7-a088-1d4240ba3831\") " pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.349122 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8xg\" (UniqueName: \"kubernetes.io/projected/fb708f49-97df-4dd7-a088-1d4240ba3831-kube-api-access-2s8xg\") pod \"auto-csr-approver-29533766-85snf\" 
(UID: \"fb708f49-97df-4dd7-a088-1d4240ba3831\") " pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.370110 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8xg\" (UniqueName: \"kubernetes.io/projected/fb708f49-97df-4dd7-a088-1d4240ba3831-kube-api-access-2s8xg\") pod \"auto-csr-approver-29533766-85snf\" (UID: \"fb708f49-97df-4dd7-a088-1d4240ba3831\") " pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.461412 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:00 crc kubenswrapper[5005]: I0225 13:26:00.984219 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533766-85snf"] Feb 25 13:26:01 crc kubenswrapper[5005]: I0225 13:26:01.439764 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533766-85snf" event={"ID":"fb708f49-97df-4dd7-a088-1d4240ba3831","Type":"ContainerStarted","Data":"6b74b025e59a030e07aa978652239dd321cd61e8a56ec3603a68e656080b0e39"} Feb 25 13:26:02 crc kubenswrapper[5005]: I0225 13:26:02.452249 5005 generic.go:334] "Generic (PLEG): container finished" podID="fb708f49-97df-4dd7-a088-1d4240ba3831" containerID="75fccea0c651dd631abc3a8e288c925ac0f4aa8701db388fce331e2497acd952" exitCode=0 Feb 25 13:26:02 crc kubenswrapper[5005]: I0225 13:26:02.452499 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533766-85snf" event={"ID":"fb708f49-97df-4dd7-a088-1d4240ba3831","Type":"ContainerDied","Data":"75fccea0c651dd631abc3a8e288c925ac0f4aa8701db388fce331e2497acd952"} Feb 25 13:26:03 crc kubenswrapper[5005]: I0225 13:26:03.752193 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:03 crc kubenswrapper[5005]: I0225 13:26:03.836894 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8xg\" (UniqueName: \"kubernetes.io/projected/fb708f49-97df-4dd7-a088-1d4240ba3831-kube-api-access-2s8xg\") pod \"fb708f49-97df-4dd7-a088-1d4240ba3831\" (UID: \"fb708f49-97df-4dd7-a088-1d4240ba3831\") " Feb 25 13:26:03 crc kubenswrapper[5005]: I0225 13:26:03.843006 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb708f49-97df-4dd7-a088-1d4240ba3831-kube-api-access-2s8xg" (OuterVolumeSpecName: "kube-api-access-2s8xg") pod "fb708f49-97df-4dd7-a088-1d4240ba3831" (UID: "fb708f49-97df-4dd7-a088-1d4240ba3831"). InnerVolumeSpecName "kube-api-access-2s8xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:26:03 crc kubenswrapper[5005]: I0225 13:26:03.939814 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8xg\" (UniqueName: \"kubernetes.io/projected/fb708f49-97df-4dd7-a088-1d4240ba3831-kube-api-access-2s8xg\") on node \"crc\" DevicePath \"\"" Feb 25 13:26:04 crc kubenswrapper[5005]: I0225 13:26:04.474313 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533766-85snf" event={"ID":"fb708f49-97df-4dd7-a088-1d4240ba3831","Type":"ContainerDied","Data":"6b74b025e59a030e07aa978652239dd321cd61e8a56ec3603a68e656080b0e39"} Feb 25 13:26:04 crc kubenswrapper[5005]: I0225 13:26:04.474353 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b74b025e59a030e07aa978652239dd321cd61e8a56ec3603a68e656080b0e39" Feb 25 13:26:04 crc kubenswrapper[5005]: I0225 13:26:04.474465 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533766-85snf" Feb 25 13:26:04 crc kubenswrapper[5005]: I0225 13:26:04.831794 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533760-24vnn"] Feb 25 13:26:04 crc kubenswrapper[5005]: I0225 13:26:04.837763 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533760-24vnn"] Feb 25 13:26:06 crc kubenswrapper[5005]: I0225 13:26:06.697393 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce9c5a6-d647-4c6e-8c33-dc2f77cc5741" path="/var/lib/kubelet/pods/dce9c5a6-d647-4c6e-8c33-dc2f77cc5741/volumes" Feb 25 13:26:25 crc kubenswrapper[5005]: I0225 13:26:25.250129 5005 scope.go:117] "RemoveContainer" containerID="f6fb75b056c6348410bf104269a6df4c9120e5a690729dd3aa8e3289f451a1ed" Feb 25 13:26:28 crc kubenswrapper[5005]: I0225 13:26:28.086957 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:26:28 crc kubenswrapper[5005]: I0225 13:26:28.087636 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.087500 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:26:58 crc kubenswrapper[5005]: 
I0225 13:26:58.088043 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.088102 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.088814 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.088867 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" gracePeriod=600 Feb 25 13:26:58 crc kubenswrapper[5005]: E0225 13:26:58.207405 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.941978 5005 generic.go:334] "Generic (PLEG): container finished" 
podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" exitCode=0 Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.942273 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"} Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.942306 5005 scope.go:117] "RemoveContainer" containerID="ffab8b4137ccfcf64a66c86d8b1801844c0afc8dfde2226efbec91c5c8562db1" Feb 25 13:26:58 crc kubenswrapper[5005]: I0225 13:26:58.943189 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:26:58 crc kubenswrapper[5005]: E0225 13:26:58.943460 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.847108 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p9w5s"] Feb 25 13:27:01 crc kubenswrapper[5005]: E0225 13:27:01.848996 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb708f49-97df-4dd7-a088-1d4240ba3831" containerName="oc" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.849012 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb708f49-97df-4dd7-a088-1d4240ba3831" containerName="oc" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.849227 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fb708f49-97df-4dd7-a088-1d4240ba3831" containerName="oc" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.850672 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.869342 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p9w5s"] Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.967328 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-catalog-content\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.967442 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dhxm\" (UniqueName: \"kubernetes.io/projected/e1fd0906-f477-4f8d-9c0e-489cd877c687-kube-api-access-6dhxm\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:01 crc kubenswrapper[5005]: I0225 13:27:01.967714 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-utilities\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.069661 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-catalog-content\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " 
pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.069713 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dhxm\" (UniqueName: \"kubernetes.io/projected/e1fd0906-f477-4f8d-9c0e-489cd877c687-kube-api-access-6dhxm\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.069787 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-utilities\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.070225 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-catalog-content\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.070254 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-utilities\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.107617 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dhxm\" (UniqueName: \"kubernetes.io/projected/e1fd0906-f477-4f8d-9c0e-489cd877c687-kube-api-access-6dhxm\") pod \"redhat-operators-p9w5s\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 
crc kubenswrapper[5005]: I0225 13:27:02.178362 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.675673 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p9w5s"] Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.981569 5005 generic.go:334] "Generic (PLEG): container finished" podID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerID="1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72" exitCode=0 Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.981615 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerDied","Data":"1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72"} Feb 25 13:27:02 crc kubenswrapper[5005]: I0225 13:27:02.981643 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerStarted","Data":"c3c2e2125702010021b4d6f7f91035d0e31b7cbdc56950d3db62b5f264684993"} Feb 25 13:27:03 crc kubenswrapper[5005]: I0225 13:27:03.990542 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerStarted","Data":"2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56"} Feb 25 13:27:05 crc kubenswrapper[5005]: I0225 13:27:05.000733 5005 generic.go:334] "Generic (PLEG): container finished" podID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerID="2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56" exitCode=0 Feb 25 13:27:05 crc kubenswrapper[5005]: I0225 13:27:05.001480 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" 
event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerDied","Data":"2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56"} Feb 25 13:27:06 crc kubenswrapper[5005]: I0225 13:27:06.012466 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerStarted","Data":"5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574"} Feb 25 13:27:06 crc kubenswrapper[5005]: I0225 13:27:06.035325 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p9w5s" podStartSLOduration=2.517921769 podStartE2EDuration="5.035303717s" podCreationTimestamp="2026-02-25 13:27:01 +0000 UTC" firstStartedPulling="2026-02-25 13:27:02.984091506 +0000 UTC m=+7737.024823833" lastFinishedPulling="2026-02-25 13:27:05.501473454 +0000 UTC m=+7739.542205781" observedRunningTime="2026-02-25 13:27:06.031983355 +0000 UTC m=+7740.072715682" watchObservedRunningTime="2026-02-25 13:27:06.035303717 +0000 UTC m=+7740.076036044" Feb 25 13:27:11 crc kubenswrapper[5005]: I0225 13:27:11.685927 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:27:11 crc kubenswrapper[5005]: E0225 13:27:11.686629 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:27:12 crc kubenswrapper[5005]: I0225 13:27:12.178806 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:12 crc kubenswrapper[5005]: 
I0225 13:27:12.178879 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:12 crc kubenswrapper[5005]: I0225 13:27:12.225018 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:13 crc kubenswrapper[5005]: I0225 13:27:13.109622 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:13 crc kubenswrapper[5005]: I0225 13:27:13.154558 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p9w5s"] Feb 25 13:27:15 crc kubenswrapper[5005]: I0225 13:27:15.080738 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p9w5s" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="registry-server" containerID="cri-o://5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574" gracePeriod=2 Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.030485 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.091093 5005 generic.go:334] "Generic (PLEG): container finished" podID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerID="5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574" exitCode=0 Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.091136 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerDied","Data":"5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574"} Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.091177 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p9w5s" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.091195 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p9w5s" event={"ID":"e1fd0906-f477-4f8d-9c0e-489cd877c687","Type":"ContainerDied","Data":"c3c2e2125702010021b4d6f7f91035d0e31b7cbdc56950d3db62b5f264684993"} Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.091212 5005 scope.go:117] "RemoveContainer" containerID="5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.110486 5005 scope.go:117] "RemoveContainer" containerID="2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.132715 5005 scope.go:117] "RemoveContainer" containerID="1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.170342 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-catalog-content\") pod \"e1fd0906-f477-4f8d-9c0e-489cd877c687\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.170570 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-utilities\") pod \"e1fd0906-f477-4f8d-9c0e-489cd877c687\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.170655 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dhxm\" (UniqueName: \"kubernetes.io/projected/e1fd0906-f477-4f8d-9c0e-489cd877c687-kube-api-access-6dhxm\") pod \"e1fd0906-f477-4f8d-9c0e-489cd877c687\" (UID: \"e1fd0906-f477-4f8d-9c0e-489cd877c687\") " 
Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.171624 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-utilities" (OuterVolumeSpecName: "utilities") pod "e1fd0906-f477-4f8d-9c0e-489cd877c687" (UID: "e1fd0906-f477-4f8d-9c0e-489cd877c687"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.175161 5005 scope.go:117] "RemoveContainer" containerID="5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574" Feb 25 13:27:16 crc kubenswrapper[5005]: E0225 13:27:16.175560 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574\": container with ID starting with 5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574 not found: ID does not exist" containerID="5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.175609 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574"} err="failed to get container status \"5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574\": rpc error: code = NotFound desc = could not find container \"5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574\": container with ID starting with 5c8a38f68952affbac214f2cbbe1d6d47f8ab48332c1f614cacd56f09573b574 not found: ID does not exist" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.175638 5005 scope.go:117] "RemoveContainer" containerID="2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56" Feb 25 13:27:16 crc kubenswrapper[5005]: E0225 13:27:16.175952 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56\": container with ID starting with 2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56 not found: ID does not exist" containerID="2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.175987 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56"} err="failed to get container status \"2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56\": rpc error: code = NotFound desc = could not find container \"2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56\": container with ID starting with 2092b865ee4a2cffa5d3f9632f06d8ed339e8f0d0e8c77fe639582b0cde6ee56 not found: ID does not exist" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.176013 5005 scope.go:117] "RemoveContainer" containerID="1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72" Feb 25 13:27:16 crc kubenswrapper[5005]: E0225 13:27:16.176319 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72\": container with ID starting with 1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72 not found: ID does not exist" containerID="1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.176353 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72"} err="failed to get container status \"1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72\": rpc error: code = NotFound desc = could not find container 
\"1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72\": container with ID starting with 1dc23c6a81fff80cdfb10f6e9fb0341faf97d771e1a513b39abfa1289c21cf72 not found: ID does not exist" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.177807 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fd0906-f477-4f8d-9c0e-489cd877c687-kube-api-access-6dhxm" (OuterVolumeSpecName: "kube-api-access-6dhxm") pod "e1fd0906-f477-4f8d-9c0e-489cd877c687" (UID: "e1fd0906-f477-4f8d-9c0e-489cd877c687"). InnerVolumeSpecName "kube-api-access-6dhxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.272805 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.272845 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dhxm\" (UniqueName: \"kubernetes.io/projected/e1fd0906-f477-4f8d-9c0e-489cd877c687-kube-api-access-6dhxm\") on node \"crc\" DevicePath \"\"" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.308640 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1fd0906-f477-4f8d-9c0e-489cd877c687" (UID: "e1fd0906-f477-4f8d-9c0e-489cd877c687"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.374418 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fd0906-f477-4f8d-9c0e-489cd877c687-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.424185 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p9w5s"] Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.432712 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p9w5s"] Feb 25 13:27:16 crc kubenswrapper[5005]: I0225 13:27:16.696709 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" path="/var/lib/kubelet/pods/e1fd0906-f477-4f8d-9c0e-489cd877c687/volumes" Feb 25 13:27:23 crc kubenswrapper[5005]: I0225 13:27:23.685820 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:27:23 crc kubenswrapper[5005]: E0225 13:27:23.686611 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:27:38 crc kubenswrapper[5005]: I0225 13:27:38.688047 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:27:38 crc kubenswrapper[5005]: E0225 13:27:38.688708 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.463950 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5m2mn"] Feb 25 13:27:48 crc kubenswrapper[5005]: E0225 13:27:48.464928 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="extract-content" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.464940 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="extract-content" Feb 25 13:27:48 crc kubenswrapper[5005]: E0225 13:27:48.464954 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="registry-server" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.464960 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="registry-server" Feb 25 13:27:48 crc kubenswrapper[5005]: E0225 13:27:48.464977 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="extract-utilities" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.464984 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="extract-utilities" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.465184 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fd0906-f477-4f8d-9c0e-489cd877c687" containerName="registry-server" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.466765 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.479423 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5m2mn"] Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.540563 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-catalog-content\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.540622 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-utilities\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.540660 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt6ng\" (UniqueName: \"kubernetes.io/projected/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-kube-api-access-qt6ng\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.641973 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-catalog-content\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.642029 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-utilities\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.642086 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt6ng\" (UniqueName: \"kubernetes.io/projected/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-kube-api-access-qt6ng\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.642390 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-catalog-content\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.642501 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-utilities\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.668015 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt6ng\" (UniqueName: \"kubernetes.io/projected/a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0-kube-api-access-qt6ng\") pod \"redhat-marketplace-5m2mn\" (UID: \"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0\") " pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:48 crc kubenswrapper[5005]: I0225 13:27:48.797927 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:49 crc kubenswrapper[5005]: I0225 13:27:49.294038 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5m2mn"] Feb 25 13:27:49 crc kubenswrapper[5005]: I0225 13:27:49.397222 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5m2mn" event={"ID":"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0","Type":"ContainerStarted","Data":"81a5e23d5bf140f96426645f5a5cbef23eecf86146a37d1f1a4a31076ea2bbd7"} Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.407909 5005 generic.go:334] "Generic (PLEG): container finished" podID="a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0" containerID="0d2b15462019ebba497d53d010bd11a6060b60d8291e24d7988b594a6a84e45b" exitCode=0 Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.408002 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5m2mn" event={"ID":"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0","Type":"ContainerDied","Data":"0d2b15462019ebba497d53d010bd11a6060b60d8291e24d7988b594a6a84e45b"} Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.663993 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hq6mr"] Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.666708 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.685032 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq6mr"] Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.688850 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:27:50 crc kubenswrapper[5005]: E0225 13:27:50.689049 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.785764 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ffw\" (UniqueName: \"kubernetes.io/projected/e109d083-e68e-4b9c-8664-c17e0b2bd75d-kube-api-access-t9ffw\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.785959 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-utilities\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.786096 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-catalog-content\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.889910 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9ffw\" (UniqueName: \"kubernetes.io/projected/e109d083-e68e-4b9c-8664-c17e0b2bd75d-kube-api-access-t9ffw\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.890015 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-utilities\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.890101 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-catalog-content\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.890732 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-catalog-content\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.890767 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-utilities\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:50 crc kubenswrapper[5005]: I0225 13:27:50.917711 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9ffw\" (UniqueName: \"kubernetes.io/projected/e109d083-e68e-4b9c-8664-c17e0b2bd75d-kube-api-access-t9ffw\") pod \"certified-operators-hq6mr\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:51 crc kubenswrapper[5005]: I0225 13:27:51.015948 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:27:51 crc kubenswrapper[5005]: W0225 13:27:51.530473 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode109d083_e68e_4b9c_8664_c17e0b2bd75d.slice/crio-0a4228896896ce2053a67cba71644f4c267510da9b9d4f3e7cc06a9eb9c5a4cc WatchSource:0}: Error finding container 0a4228896896ce2053a67cba71644f4c267510da9b9d4f3e7cc06a9eb9c5a4cc: Status 404 returned error can't find the container with id 0a4228896896ce2053a67cba71644f4c267510da9b9d4f3e7cc06a9eb9c5a4cc Feb 25 13:27:51 crc kubenswrapper[5005]: I0225 13:27:51.532250 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq6mr"] Feb 25 13:27:52 crc kubenswrapper[5005]: I0225 13:27:52.426215 5005 generic.go:334] "Generic (PLEG): container finished" podID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerID="6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5" exitCode=0 Feb 25 13:27:52 crc kubenswrapper[5005]: I0225 13:27:52.426313 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq6mr" 
event={"ID":"e109d083-e68e-4b9c-8664-c17e0b2bd75d","Type":"ContainerDied","Data":"6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5"} Feb 25 13:27:52 crc kubenswrapper[5005]: I0225 13:27:52.426576 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq6mr" event={"ID":"e109d083-e68e-4b9c-8664-c17e0b2bd75d","Type":"ContainerStarted","Data":"0a4228896896ce2053a67cba71644f4c267510da9b9d4f3e7cc06a9eb9c5a4cc"} Feb 25 13:27:55 crc kubenswrapper[5005]: I0225 13:27:55.451651 5005 generic.go:334] "Generic (PLEG): container finished" podID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerID="58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5" exitCode=0 Feb 25 13:27:55 crc kubenswrapper[5005]: I0225 13:27:55.451726 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq6mr" event={"ID":"e109d083-e68e-4b9c-8664-c17e0b2bd75d","Type":"ContainerDied","Data":"58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5"} Feb 25 13:27:55 crc kubenswrapper[5005]: I0225 13:27:55.456521 5005 generic.go:334] "Generic (PLEG): container finished" podID="a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0" containerID="950f97db7cc4bb4eac6dcac0818a20100959f123d49f6b57b3de783fa9ad7970" exitCode=0 Feb 25 13:27:55 crc kubenswrapper[5005]: I0225 13:27:55.456931 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5m2mn" event={"ID":"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0","Type":"ContainerDied","Data":"950f97db7cc4bb4eac6dcac0818a20100959f123d49f6b57b3de783fa9ad7970"} Feb 25 13:27:56 crc kubenswrapper[5005]: I0225 13:27:56.465860 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq6mr" event={"ID":"e109d083-e68e-4b9c-8664-c17e0b2bd75d","Type":"ContainerStarted","Data":"9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a"} Feb 25 13:27:56 crc kubenswrapper[5005]: I0225 
13:27:56.468587 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5m2mn" event={"ID":"a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0","Type":"ContainerStarted","Data":"5e251b4c6fd22658168a25737d03400dd3aeafba632fea30cdce47973a5f7346"} Feb 25 13:27:56 crc kubenswrapper[5005]: I0225 13:27:56.486885 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hq6mr" podStartSLOduration=3.010087331 podStartE2EDuration="6.486864371s" podCreationTimestamp="2026-02-25 13:27:50 +0000 UTC" firstStartedPulling="2026-02-25 13:27:52.428444712 +0000 UTC m=+7786.469177039" lastFinishedPulling="2026-02-25 13:27:55.905221752 +0000 UTC m=+7789.945954079" observedRunningTime="2026-02-25 13:27:56.484481798 +0000 UTC m=+7790.525214135" watchObservedRunningTime="2026-02-25 13:27:56.486864371 +0000 UTC m=+7790.527596708" Feb 25 13:27:56 crc kubenswrapper[5005]: I0225 13:27:56.512151 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5m2mn" podStartSLOduration=3.094646168 podStartE2EDuration="8.512130891s" podCreationTimestamp="2026-02-25 13:27:48 +0000 UTC" firstStartedPulling="2026-02-25 13:27:50.409798973 +0000 UTC m=+7784.450531320" lastFinishedPulling="2026-02-25 13:27:55.827283716 +0000 UTC m=+7789.868016043" observedRunningTime="2026-02-25 13:27:56.505929881 +0000 UTC m=+7790.546662208" watchObservedRunningTime="2026-02-25 13:27:56.512130891 +0000 UTC m=+7790.552863218" Feb 25 13:27:58 crc kubenswrapper[5005]: I0225 13:27:58.799037 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:58 crc kubenswrapper[5005]: I0225 13:27:58.799331 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:27:58 crc kubenswrapper[5005]: I0225 13:27:58.864686 5005 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.140384 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533768-rcrlb"] Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.141931 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.145549 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.146481 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.147513 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.167477 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533768-rcrlb"] Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.270786 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc484\" (UniqueName: \"kubernetes.io/projected/5c863f15-7316-4ba9-9f46-dfe5705563ac-kube-api-access-sc484\") pod \"auto-csr-approver-29533768-rcrlb\" (UID: \"5c863f15-7316-4ba9-9f46-dfe5705563ac\") " pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.373156 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc484\" (UniqueName: \"kubernetes.io/projected/5c863f15-7316-4ba9-9f46-dfe5705563ac-kube-api-access-sc484\") pod \"auto-csr-approver-29533768-rcrlb\" (UID: \"5c863f15-7316-4ba9-9f46-dfe5705563ac\") " 
pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.396811 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc484\" (UniqueName: \"kubernetes.io/projected/5c863f15-7316-4ba9-9f46-dfe5705563ac-kube-api-access-sc484\") pod \"auto-csr-approver-29533768-rcrlb\" (UID: \"5c863f15-7316-4ba9-9f46-dfe5705563ac\") " pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.464222 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:00 crc kubenswrapper[5005]: I0225 13:28:00.958765 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533768-rcrlb"] Feb 25 13:28:00 crc kubenswrapper[5005]: W0225 13:28:00.964292 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c863f15_7316_4ba9_9f46_dfe5705563ac.slice/crio-eda8bc6afe9ade08faa6d2d5a08a2316fdb09dc7f17b34528d00b2a8c3dbe0e6 WatchSource:0}: Error finding container eda8bc6afe9ade08faa6d2d5a08a2316fdb09dc7f17b34528d00b2a8c3dbe0e6: Status 404 returned error can't find the container with id eda8bc6afe9ade08faa6d2d5a08a2316fdb09dc7f17b34528d00b2a8c3dbe0e6 Feb 25 13:28:01 crc kubenswrapper[5005]: I0225 13:28:01.016389 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:28:01 crc kubenswrapper[5005]: I0225 13:28:01.016446 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:28:01 crc kubenswrapper[5005]: I0225 13:28:01.065526 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:28:01 crc kubenswrapper[5005]: I0225 
13:28:01.543936 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" event={"ID":"5c863f15-7316-4ba9-9f46-dfe5705563ac","Type":"ContainerStarted","Data":"eda8bc6afe9ade08faa6d2d5a08a2316fdb09dc7f17b34528d00b2a8c3dbe0e6"} Feb 25 13:28:01 crc kubenswrapper[5005]: I0225 13:28:01.589471 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:28:01 crc kubenswrapper[5005]: I0225 13:28:01.636730 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq6mr"] Feb 25 13:28:02 crc kubenswrapper[5005]: I0225 13:28:02.552537 5005 generic.go:334] "Generic (PLEG): container finished" podID="5c863f15-7316-4ba9-9f46-dfe5705563ac" containerID="6c89447031bb15dd56537a8e4ca49e6e83c592e11189e7664368f8470dc16f9c" exitCode=0 Feb 25 13:28:02 crc kubenswrapper[5005]: I0225 13:28:02.552598 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" event={"ID":"5c863f15-7316-4ba9-9f46-dfe5705563ac","Type":"ContainerDied","Data":"6c89447031bb15dd56537a8e4ca49e6e83c592e11189e7664368f8470dc16f9c"} Feb 25 13:28:03 crc kubenswrapper[5005]: I0225 13:28:03.559956 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hq6mr" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="registry-server" containerID="cri-o://9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a" gracePeriod=2 Feb 25 13:28:03 crc kubenswrapper[5005]: I0225 13:28:03.922631 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:03 crc kubenswrapper[5005]: I0225 13:28:03.968361 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc484\" (UniqueName: \"kubernetes.io/projected/5c863f15-7316-4ba9-9f46-dfe5705563ac-kube-api-access-sc484\") pod \"5c863f15-7316-4ba9-9f46-dfe5705563ac\" (UID: \"5c863f15-7316-4ba9-9f46-dfe5705563ac\") " Feb 25 13:28:03 crc kubenswrapper[5005]: I0225 13:28:03.975132 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c863f15-7316-4ba9-9f46-dfe5705563ac-kube-api-access-sc484" (OuterVolumeSpecName: "kube-api-access-sc484") pod "5c863f15-7316-4ba9-9f46-dfe5705563ac" (UID: "5c863f15-7316-4ba9-9f46-dfe5705563ac"). InnerVolumeSpecName "kube-api-access-sc484". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.033860 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.070753 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-catalog-content\") pod \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.070843 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9ffw\" (UniqueName: \"kubernetes.io/projected/e109d083-e68e-4b9c-8664-c17e0b2bd75d-kube-api-access-t9ffw\") pod \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.071081 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-utilities\") pod \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\" (UID: \"e109d083-e68e-4b9c-8664-c17e0b2bd75d\") " Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.071789 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc484\" (UniqueName: \"kubernetes.io/projected/5c863f15-7316-4ba9-9f46-dfe5705563ac-kube-api-access-sc484\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.072017 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-utilities" (OuterVolumeSpecName: "utilities") pod "e109d083-e68e-4b9c-8664-c17e0b2bd75d" (UID: "e109d083-e68e-4b9c-8664-c17e0b2bd75d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.073911 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e109d083-e68e-4b9c-8664-c17e0b2bd75d-kube-api-access-t9ffw" (OuterVolumeSpecName: "kube-api-access-t9ffw") pod "e109d083-e68e-4b9c-8664-c17e0b2bd75d" (UID: "e109d083-e68e-4b9c-8664-c17e0b2bd75d"). InnerVolumeSpecName "kube-api-access-t9ffw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.119316 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e109d083-e68e-4b9c-8664-c17e0b2bd75d" (UID: "e109d083-e68e-4b9c-8664-c17e0b2bd75d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.173419 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9ffw\" (UniqueName: \"kubernetes.io/projected/e109d083-e68e-4b9c-8664-c17e0b2bd75d-kube-api-access-t9ffw\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.173458 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.173468 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e109d083-e68e-4b9c-8664-c17e0b2bd75d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.571040 5005 generic.go:334] "Generic (PLEG): container finished" podID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" 
containerID="9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a" exitCode=0 Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.571094 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq6mr" event={"ID":"e109d083-e68e-4b9c-8664-c17e0b2bd75d","Type":"ContainerDied","Data":"9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a"} Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.571122 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq6mr" event={"ID":"e109d083-e68e-4b9c-8664-c17e0b2bd75d","Type":"ContainerDied","Data":"0a4228896896ce2053a67cba71644f4c267510da9b9d4f3e7cc06a9eb9c5a4cc"} Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.571142 5005 scope.go:117] "RemoveContainer" containerID="9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.571240 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hq6mr" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.575778 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" event={"ID":"5c863f15-7316-4ba9-9f46-dfe5705563ac","Type":"ContainerDied","Data":"eda8bc6afe9ade08faa6d2d5a08a2316fdb09dc7f17b34528d00b2a8c3dbe0e6"} Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.575824 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda8bc6afe9ade08faa6d2d5a08a2316fdb09dc7f17b34528d00b2a8c3dbe0e6" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.575799 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533768-rcrlb" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.600847 5005 scope.go:117] "RemoveContainer" containerID="58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.611836 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq6mr"] Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.619842 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hq6mr"] Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.640995 5005 scope.go:117] "RemoveContainer" containerID="6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.658071 5005 scope.go:117] "RemoveContainer" containerID="9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a" Feb 25 13:28:04 crc kubenswrapper[5005]: E0225 13:28:04.658656 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a\": container with ID starting with 9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a not found: ID does not exist" containerID="9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.658704 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a"} err="failed to get container status \"9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a\": rpc error: code = NotFound desc = could not find container \"9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a\": container with ID starting with 9c7f858022131041b1a9741d8f1cafb1ad8ca518575bae6b7fc77507619c820a not 
found: ID does not exist" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.658740 5005 scope.go:117] "RemoveContainer" containerID="58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5" Feb 25 13:28:04 crc kubenswrapper[5005]: E0225 13:28:04.659162 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5\": container with ID starting with 58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5 not found: ID does not exist" containerID="58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.659192 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5"} err="failed to get container status \"58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5\": rpc error: code = NotFound desc = could not find container \"58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5\": container with ID starting with 58eec734d82945a4ded22be7581a2f7e92b01c5029184b355ed9b4705a00f5a5 not found: ID does not exist" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.659214 5005 scope.go:117] "RemoveContainer" containerID="6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5" Feb 25 13:28:04 crc kubenswrapper[5005]: E0225 13:28:04.659576 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5\": container with ID starting with 6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5 not found: ID does not exist" containerID="6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.659603 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5"} err="failed to get container status \"6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5\": rpc error: code = NotFound desc = could not find container \"6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5\": container with ID starting with 6937fca9b7f4fdaf53ad46aa62ea70902353a508e22093e219e25a517a0af7c5 not found: ID does not exist" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.685904 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:28:04 crc kubenswrapper[5005]: E0225 13:28:04.686519 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.697455 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" path="/var/lib/kubelet/pods/e109d083-e68e-4b9c-8664-c17e0b2bd75d/volumes" Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.991228 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533762-kkcbc"] Feb 25 13:28:04 crc kubenswrapper[5005]: I0225 13:28:04.999663 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533762-kkcbc"] Feb 25 13:28:06 crc kubenswrapper[5005]: I0225 13:28:06.708121 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda21550-312d-489d-81ae-944b6cd64897" 
path="/var/lib/kubelet/pods/cda21550-312d-489d-81ae-944b6cd64897/volumes" Feb 25 13:28:08 crc kubenswrapper[5005]: I0225 13:28:08.850452 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5m2mn" Feb 25 13:28:08 crc kubenswrapper[5005]: I0225 13:28:08.938144 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5m2mn"] Feb 25 13:28:08 crc kubenswrapper[5005]: I0225 13:28:08.980693 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wzmf"] Feb 25 13:28:08 crc kubenswrapper[5005]: I0225 13:28:08.980988 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8wzmf" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="registry-server" containerID="cri-o://46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0" gracePeriod=2 Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.401949 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.573103 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-utilities\") pod \"d339f835-0982-43e9-9d42-3a6893c3905e\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.573404 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhq7s\" (UniqueName: \"kubernetes.io/projected/d339f835-0982-43e9-9d42-3a6893c3905e-kube-api-access-mhq7s\") pod \"d339f835-0982-43e9-9d42-3a6893c3905e\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.573527 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-catalog-content\") pod \"d339f835-0982-43e9-9d42-3a6893c3905e\" (UID: \"d339f835-0982-43e9-9d42-3a6893c3905e\") " Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.574488 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-utilities" (OuterVolumeSpecName: "utilities") pod "d339f835-0982-43e9-9d42-3a6893c3905e" (UID: "d339f835-0982-43e9-9d42-3a6893c3905e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.578622 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d339f835-0982-43e9-9d42-3a6893c3905e-kube-api-access-mhq7s" (OuterVolumeSpecName: "kube-api-access-mhq7s") pod "d339f835-0982-43e9-9d42-3a6893c3905e" (UID: "d339f835-0982-43e9-9d42-3a6893c3905e"). InnerVolumeSpecName "kube-api-access-mhq7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.627088 5005 generic.go:334] "Generic (PLEG): container finished" podID="d339f835-0982-43e9-9d42-3a6893c3905e" containerID="46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0" exitCode=0 Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.627252 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8wzmf" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.627294 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wzmf" event={"ID":"d339f835-0982-43e9-9d42-3a6893c3905e","Type":"ContainerDied","Data":"46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0"} Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.627326 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8wzmf" event={"ID":"d339f835-0982-43e9-9d42-3a6893c3905e","Type":"ContainerDied","Data":"00fcc0cd30bb8a361b362ddfd7f8d3578f16cc1f6fde7203a4155a9e362a3c0c"} Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.627346 5005 scope.go:117] "RemoveContainer" containerID="46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.664259 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d339f835-0982-43e9-9d42-3a6893c3905e" (UID: "d339f835-0982-43e9-9d42-3a6893c3905e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.666259 5005 scope.go:117] "RemoveContainer" containerID="be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.678652 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.678679 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d339f835-0982-43e9-9d42-3a6893c3905e-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.678689 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhq7s\" (UniqueName: \"kubernetes.io/projected/d339f835-0982-43e9-9d42-3a6893c3905e-kube-api-access-mhq7s\") on node \"crc\" DevicePath \"\"" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.734958 5005 scope.go:117] "RemoveContainer" containerID="662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.758646 5005 scope.go:117] "RemoveContainer" containerID="46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0" Feb 25 13:28:09 crc kubenswrapper[5005]: E0225 13:28:09.761080 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0\": container with ID starting with 46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0 not found: ID does not exist" containerID="46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.761126 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0"} err="failed to get container status \"46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0\": rpc error: code = NotFound desc = could not find container \"46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0\": container with ID starting with 46aa60136b3194ba523cf3585d3ad66a5abf6a809ba562e631771c95f3c87dd0 not found: ID does not exist" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.761154 5005 scope.go:117] "RemoveContainer" containerID="be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c" Feb 25 13:28:09 crc kubenswrapper[5005]: E0225 13:28:09.761638 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c\": container with ID starting with be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c not found: ID does not exist" containerID="be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.761687 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c"} err="failed to get container status \"be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c\": rpc error: code = NotFound desc = could not find container \"be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c\": container with ID starting with be4197429856bc33c23ff1e52baa50c26243257f70e6bfec9e66b2e67459c25c not found: ID does not exist" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.761719 5005 scope.go:117] "RemoveContainer" containerID="662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143" Feb 25 13:28:09 crc kubenswrapper[5005]: E0225 13:28:09.762202 5005 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143\": container with ID starting with 662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143 not found: ID does not exist" containerID="662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.762226 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143"} err="failed to get container status \"662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143\": rpc error: code = NotFound desc = could not find container \"662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143\": container with ID starting with 662156c5a59d5baa1f3fbd53e3fb3183cf9f3e97e7637e9f1ddc9da254d53143 not found: ID does not exist" Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.965455 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wzmf"] Feb 25 13:28:09 crc kubenswrapper[5005]: I0225 13:28:09.974285 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8wzmf"] Feb 25 13:28:10 crc kubenswrapper[5005]: I0225 13:28:10.698447 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" path="/var/lib/kubelet/pods/d339f835-0982-43e9-9d42-3a6893c3905e/volumes" Feb 25 13:28:16 crc kubenswrapper[5005]: I0225 13:28:16.691811 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:28:16 crc kubenswrapper[5005]: E0225 13:28:16.692636 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:28:25 crc kubenswrapper[5005]: I0225 13:28:25.387411 5005 scope.go:117] "RemoveContainer" containerID="eaee758a0e30923c6c246e4d2d9d8ede930988d4495fec92301474c4c5c9284b" Feb 25 13:28:28 crc kubenswrapper[5005]: I0225 13:28:28.685061 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:28:28 crc kubenswrapper[5005]: E0225 13:28:28.685821 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:28:39 crc kubenswrapper[5005]: I0225 13:28:39.686274 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:28:39 crc kubenswrapper[5005]: E0225 13:28:39.687609 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:28:52 crc kubenswrapper[5005]: I0225 13:28:52.686057 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:28:52 crc kubenswrapper[5005]: E0225 13:28:52.687002 5005 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:29:03 crc kubenswrapper[5005]: I0225 13:29:03.685286 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:29:03 crc kubenswrapper[5005]: E0225 13:29:03.686045 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:29:18 crc kubenswrapper[5005]: I0225 13:29:18.686756 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:29:18 crc kubenswrapper[5005]: E0225 13:29:18.687929 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:29:30 crc kubenswrapper[5005]: I0225 13:29:30.686107 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:29:30 crc kubenswrapper[5005]: E0225 
13:29:30.687200 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:29:42 crc kubenswrapper[5005]: I0225 13:29:42.685695 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:29:42 crc kubenswrapper[5005]: E0225 13:29:42.686690 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:29:53 crc kubenswrapper[5005]: I0225 13:29:53.685314 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:29:53 crc kubenswrapper[5005]: E0225 13:29:53.686061 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.161835 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth"] Feb 25 13:30:00 
crc kubenswrapper[5005]: E0225 13:30:00.163150 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="extract-content" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163172 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="extract-content" Feb 25 13:30:00 crc kubenswrapper[5005]: E0225 13:30:00.163189 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="registry-server" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163196 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="registry-server" Feb 25 13:30:00 crc kubenswrapper[5005]: E0225 13:30:00.163219 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="extract-utilities" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163229 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="extract-utilities" Feb 25 13:30:00 crc kubenswrapper[5005]: E0225 13:30:00.163247 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="extract-utilities" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163253 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="extract-utilities" Feb 25 13:30:00 crc kubenswrapper[5005]: E0225 13:30:00.163270 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="extract-content" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163277 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="extract-content" Feb 25 13:30:00 crc 
kubenswrapper[5005]: E0225 13:30:00.163287 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="registry-server" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163294 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="registry-server" Feb 25 13:30:00 crc kubenswrapper[5005]: E0225 13:30:00.163303 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c863f15-7316-4ba9-9f46-dfe5705563ac" containerName="oc" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.163312 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c863f15-7316-4ba9-9f46-dfe5705563ac" containerName="oc" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.164030 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c863f15-7316-4ba9-9f46-dfe5705563ac" containerName="oc" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.164058 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d339f835-0982-43e9-9d42-3a6893c3905e" containerName="registry-server" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.164089 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e109d083-e68e-4b9c-8664-c17e0b2bd75d" containerName="registry-server" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.165145 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.167501 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.167787 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.174559 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533770-q968s"] Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.176138 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533770-q968s" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.178923 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.178923 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.178938 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.191884 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth"] Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.217598 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533770-q968s"] Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.310009 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxl9w\" (UniqueName: 
\"kubernetes.io/projected/bbd697d6-ee91-402b-a985-3ea1756f70d7-kube-api-access-jxl9w\") pod \"auto-csr-approver-29533770-q968s\" (UID: \"bbd697d6-ee91-402b-a985-3ea1756f70d7\") " pod="openshift-infra/auto-csr-approver-29533770-q968s" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.310117 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb76377-039f-40e0-8934-b8d0c7bc7bda-config-volume\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.310209 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjrx\" (UniqueName: \"kubernetes.io/projected/1cb76377-039f-40e0-8934-b8d0c7bc7bda-kube-api-access-9jjrx\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.310271 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1cb76377-039f-40e0-8934-b8d0c7bc7bda-secret-volume\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.412275 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jjrx\" (UniqueName: \"kubernetes.io/projected/1cb76377-039f-40e0-8934-b8d0c7bc7bda-kube-api-access-9jjrx\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" 
Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.412534 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1cb76377-039f-40e0-8934-b8d0c7bc7bda-secret-volume\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.412572 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxl9w\" (UniqueName: \"kubernetes.io/projected/bbd697d6-ee91-402b-a985-3ea1756f70d7-kube-api-access-jxl9w\") pod \"auto-csr-approver-29533770-q968s\" (UID: \"bbd697d6-ee91-402b-a985-3ea1756f70d7\") " pod="openshift-infra/auto-csr-approver-29533770-q968s" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.412661 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb76377-039f-40e0-8934-b8d0c7bc7bda-config-volume\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.414005 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb76377-039f-40e0-8934-b8d0c7bc7bda-config-volume\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.419688 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1cb76377-039f-40e0-8934-b8d0c7bc7bda-secret-volume\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.435744 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jjrx\" (UniqueName: \"kubernetes.io/projected/1cb76377-039f-40e0-8934-b8d0c7bc7bda-kube-api-access-9jjrx\") pod \"collect-profiles-29533770-9jtth\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.438149 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxl9w\" (UniqueName: \"kubernetes.io/projected/bbd697d6-ee91-402b-a985-3ea1756f70d7-kube-api-access-jxl9w\") pod \"auto-csr-approver-29533770-q968s\" (UID: \"bbd697d6-ee91-402b-a985-3ea1756f70d7\") " pod="openshift-infra/auto-csr-approver-29533770-q968s" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.506345 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.516197 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533770-q968s" Feb 25 13:30:00 crc kubenswrapper[5005]: I0225 13:30:00.993586 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth"] Feb 25 13:30:01 crc kubenswrapper[5005]: I0225 13:30:01.066357 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533770-q968s"] Feb 25 13:30:01 crc kubenswrapper[5005]: W0225 13:30:01.068966 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbd697d6_ee91_402b_a985_3ea1756f70d7.slice/crio-eae2cc6a23b4e78a3d8f043ddd8599eee3572bbcad380ef3c03a36b4eb07f01b WatchSource:0}: Error finding container eae2cc6a23b4e78a3d8f043ddd8599eee3572bbcad380ef3c03a36b4eb07f01b: Status 404 returned error can't find the container with id eae2cc6a23b4e78a3d8f043ddd8599eee3572bbcad380ef3c03a36b4eb07f01b Feb 25 13:30:01 crc kubenswrapper[5005]: I0225 13:30:01.071651 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:30:01 crc kubenswrapper[5005]: I0225 13:30:01.835522 5005 generic.go:334] "Generic (PLEG): container finished" podID="1cb76377-039f-40e0-8934-b8d0c7bc7bda" containerID="5f7adb34e6c338d9739b660f5e91c844c71f456f0881e2b4f3c0d410e2522f6f" exitCode=0 Feb 25 13:30:01 crc kubenswrapper[5005]: I0225 13:30:01.835733 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" event={"ID":"1cb76377-039f-40e0-8934-b8d0c7bc7bda","Type":"ContainerDied","Data":"5f7adb34e6c338d9739b660f5e91c844c71f456f0881e2b4f3c0d410e2522f6f"} Feb 25 13:30:01 crc kubenswrapper[5005]: I0225 13:30:01.837347 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" 
event={"ID":"1cb76377-039f-40e0-8934-b8d0c7bc7bda","Type":"ContainerStarted","Data":"9ae9232f42ec147bfc962d53707fb43996c75d091df487dbdfaafdfebd0f7db3"} Feb 25 13:30:01 crc kubenswrapper[5005]: I0225 13:30:01.841424 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533770-q968s" event={"ID":"bbd697d6-ee91-402b-a985-3ea1756f70d7","Type":"ContainerStarted","Data":"eae2cc6a23b4e78a3d8f043ddd8599eee3572bbcad380ef3c03a36b4eb07f01b"} Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.268144 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.368832 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb76377-039f-40e0-8934-b8d0c7bc7bda-config-volume\") pod \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.368940 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1cb76377-039f-40e0-8934-b8d0c7bc7bda-secret-volume\") pod \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.369027 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jjrx\" (UniqueName: \"kubernetes.io/projected/1cb76377-039f-40e0-8934-b8d0c7bc7bda-kube-api-access-9jjrx\") pod \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\" (UID: \"1cb76377-039f-40e0-8934-b8d0c7bc7bda\") " Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.369883 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb76377-039f-40e0-8934-b8d0c7bc7bda-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "1cb76377-039f-40e0-8934-b8d0c7bc7bda" (UID: "1cb76377-039f-40e0-8934-b8d0c7bc7bda"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.374621 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb76377-039f-40e0-8934-b8d0c7bc7bda-kube-api-access-9jjrx" (OuterVolumeSpecName: "kube-api-access-9jjrx") pod "1cb76377-039f-40e0-8934-b8d0c7bc7bda" (UID: "1cb76377-039f-40e0-8934-b8d0c7bc7bda"). InnerVolumeSpecName "kube-api-access-9jjrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.374728 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb76377-039f-40e0-8934-b8d0c7bc7bda-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1cb76377-039f-40e0-8934-b8d0c7bc7bda" (UID: "1cb76377-039f-40e0-8934-b8d0c7bc7bda"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.471333 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cb76377-039f-40e0-8934-b8d0c7bc7bda-config-volume\") on node \"crc\" DevicePath \"\""
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.471390 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1cb76377-039f-40e0-8934-b8d0c7bc7bda-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.471404 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jjrx\" (UniqueName: \"kubernetes.io/projected/1cb76377-039f-40e0-8934-b8d0c7bc7bda-kube-api-access-9jjrx\") on node \"crc\" DevicePath \"\""
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.866825 5005 generic.go:334] "Generic (PLEG): container finished" podID="bbd697d6-ee91-402b-a985-3ea1756f70d7" containerID="7f0e224f15b77ca322758f69c285648861e1beae953f29e02e2b24d2640568ca" exitCode=0
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.866889 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533770-q968s" event={"ID":"bbd697d6-ee91-402b-a985-3ea1756f70d7","Type":"ContainerDied","Data":"7f0e224f15b77ca322758f69c285648861e1beae953f29e02e2b24d2640568ca"}
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.869226 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth" event={"ID":"1cb76377-039f-40e0-8934-b8d0c7bc7bda","Type":"ContainerDied","Data":"9ae9232f42ec147bfc962d53707fb43996c75d091df487dbdfaafdfebd0f7db3"}
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.869259 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ae9232f42ec147bfc962d53707fb43996c75d091df487dbdfaafdfebd0f7db3"
Feb 25 13:30:03 crc kubenswrapper[5005]: I0225 13:30:03.869302 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth"
Feb 25 13:30:04 crc kubenswrapper[5005]: I0225 13:30:04.343522 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk"]
Feb 25 13:30:04 crc kubenswrapper[5005]: I0225 13:30:04.353927 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533725-k4qtk"]
Feb 25 13:30:04 crc kubenswrapper[5005]: I0225 13:30:04.685829 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:30:04 crc kubenswrapper[5005]: E0225 13:30:04.686054 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:30:04 crc kubenswrapper[5005]: I0225 13:30:04.696497 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3eb95e-5fe6-437b-bcb2-6398a67047e3" path="/var/lib/kubelet/pods/2e3eb95e-5fe6-437b-bcb2-6398a67047e3/volumes"
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.169155 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533770-q968s"
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.302243 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxl9w\" (UniqueName: \"kubernetes.io/projected/bbd697d6-ee91-402b-a985-3ea1756f70d7-kube-api-access-jxl9w\") pod \"bbd697d6-ee91-402b-a985-3ea1756f70d7\" (UID: \"bbd697d6-ee91-402b-a985-3ea1756f70d7\") "
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.307915 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd697d6-ee91-402b-a985-3ea1756f70d7-kube-api-access-jxl9w" (OuterVolumeSpecName: "kube-api-access-jxl9w") pod "bbd697d6-ee91-402b-a985-3ea1756f70d7" (UID: "bbd697d6-ee91-402b-a985-3ea1756f70d7"). InnerVolumeSpecName "kube-api-access-jxl9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.404626 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxl9w\" (UniqueName: \"kubernetes.io/projected/bbd697d6-ee91-402b-a985-3ea1756f70d7-kube-api-access-jxl9w\") on node \"crc\" DevicePath \"\""
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.887951 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533770-q968s" event={"ID":"bbd697d6-ee91-402b-a985-3ea1756f70d7","Type":"ContainerDied","Data":"eae2cc6a23b4e78a3d8f043ddd8599eee3572bbcad380ef3c03a36b4eb07f01b"}
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.887999 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae2cc6a23b4e78a3d8f043ddd8599eee3572bbcad380ef3c03a36b4eb07f01b"
Feb 25 13:30:05 crc kubenswrapper[5005]: I0225 13:30:05.888419 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533770-q968s"
Feb 25 13:30:06 crc kubenswrapper[5005]: I0225 13:30:06.224809 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533764-nfdfr"]
Feb 25 13:30:06 crc kubenswrapper[5005]: I0225 13:30:06.232236 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533764-nfdfr"]
Feb 25 13:30:06 crc kubenswrapper[5005]: I0225 13:30:06.703544 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b56ff460-a07c-4b55-8421-2f828e008427" path="/var/lib/kubelet/pods/b56ff460-a07c-4b55-8421-2f828e008427/volumes"
Feb 25 13:30:15 crc kubenswrapper[5005]: I0225 13:30:15.685358 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:30:15 crc kubenswrapper[5005]: E0225 13:30:15.686159 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:30:25 crc kubenswrapper[5005]: I0225 13:30:25.620335 5005 scope.go:117] "RemoveContainer" containerID="ca3ebebc3eae409be28cb25ad9c311d14f2f6e89565e806c26a32074494607d3"
Feb 25 13:30:25 crc kubenswrapper[5005]: I0225 13:30:25.678992 5005 scope.go:117] "RemoveContainer" containerID="60b626f194b038864217313f9b0d414505f3828f31558686c5072876a4fb0af5"
Feb 25 13:30:27 crc kubenswrapper[5005]: I0225 13:30:27.686107 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:30:27 crc kubenswrapper[5005]: E0225 13:30:27.687681 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:30:42 crc kubenswrapper[5005]: I0225 13:30:42.686544 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:30:42 crc kubenswrapper[5005]: E0225 13:30:42.687551 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.341796 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jbznp"]
Feb 25 13:30:45 crc kubenswrapper[5005]: E0225 13:30:45.353183 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb76377-039f-40e0-8934-b8d0c7bc7bda" containerName="collect-profiles"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.353222 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb76377-039f-40e0-8934-b8d0c7bc7bda" containerName="collect-profiles"
Feb 25 13:30:45 crc kubenswrapper[5005]: E0225 13:30:45.353256 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd697d6-ee91-402b-a985-3ea1756f70d7" containerName="oc"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.353265 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd697d6-ee91-402b-a985-3ea1756f70d7" containerName="oc"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.353475 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb76377-039f-40e0-8934-b8d0c7bc7bda" containerName="collect-profiles"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.353502 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd697d6-ee91-402b-a985-3ea1756f70d7" containerName="oc"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.356553 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbznp"]
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.356640 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.449137 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vcr\" (UniqueName: \"kubernetes.io/projected/a888f55e-9170-4b20-8b73-d29f2fc6dc23-kube-api-access-74vcr\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.449260 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-catalog-content\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.449287 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-utilities\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.550990 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-utilities\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.551154 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74vcr\" (UniqueName: \"kubernetes.io/projected/a888f55e-9170-4b20-8b73-d29f2fc6dc23-kube-api-access-74vcr\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.551302 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-catalog-content\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.551603 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-utilities\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.551772 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-catalog-content\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.573334 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vcr\" (UniqueName: \"kubernetes.io/projected/a888f55e-9170-4b20-8b73-d29f2fc6dc23-kube-api-access-74vcr\") pod \"community-operators-jbznp\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") " pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:45 crc kubenswrapper[5005]: I0225 13:30:45.691278 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:46 crc kubenswrapper[5005]: I0225 13:30:46.229936 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbznp"]
Feb 25 13:30:47 crc kubenswrapper[5005]: I0225 13:30:47.233946 5005 generic.go:334] "Generic (PLEG): container finished" podID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerID="5f8ba93ef0ad39585fa1b7521e6ad2d426c7412fee834215b9ab3d934e7ab8a1" exitCode=0
Feb 25 13:30:47 crc kubenswrapper[5005]: I0225 13:30:47.234060 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerDied","Data":"5f8ba93ef0ad39585fa1b7521e6ad2d426c7412fee834215b9ab3d934e7ab8a1"}
Feb 25 13:30:47 crc kubenswrapper[5005]: I0225 13:30:47.234226 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerStarted","Data":"5b3f633363b15067e25106405c78698bdf72ca7a51dcafe431c9397d354956e9"}
Feb 25 13:30:48 crc kubenswrapper[5005]: I0225 13:30:48.269963 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerStarted","Data":"6666f9b75cca35c0ea807ef33caff036da3044f52c66e9d4f7bc25135b30ec7b"}
Feb 25 13:30:49 crc kubenswrapper[5005]: I0225 13:30:49.282287 5005 generic.go:334] "Generic (PLEG): container finished" podID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerID="6666f9b75cca35c0ea807ef33caff036da3044f52c66e9d4f7bc25135b30ec7b" exitCode=0
Feb 25 13:30:49 crc kubenswrapper[5005]: I0225 13:30:49.282355 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerDied","Data":"6666f9b75cca35c0ea807ef33caff036da3044f52c66e9d4f7bc25135b30ec7b"}
Feb 25 13:30:50 crc kubenswrapper[5005]: I0225 13:30:50.293404 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerStarted","Data":"fbae4fae3a43f0b79542583836a769a25c8368bf5eed0c5a4f1826b3bc3ce409"}
Feb 25 13:30:50 crc kubenswrapper[5005]: I0225 13:30:50.315734 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jbznp" podStartSLOduration=2.869793005 podStartE2EDuration="5.315716576s" podCreationTimestamp="2026-02-25 13:30:45 +0000 UTC" firstStartedPulling="2026-02-25 13:30:47.236916124 +0000 UTC m=+7961.277648451" lastFinishedPulling="2026-02-25 13:30:49.682839675 +0000 UTC m=+7963.723572022" observedRunningTime="2026-02-25 13:30:50.309653889 +0000 UTC m=+7964.350386216" watchObservedRunningTime="2026-02-25 13:30:50.315716576 +0000 UTC m=+7964.356448903"
Feb 25 13:30:55 crc kubenswrapper[5005]: I0225 13:30:55.692160 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:55 crc kubenswrapper[5005]: I0225 13:30:55.692745 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:55 crc kubenswrapper[5005]: I0225 13:30:55.747843 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:56 crc kubenswrapper[5005]: I0225 13:30:56.392994 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:30:56 crc kubenswrapper[5005]: I0225 13:30:56.693054 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:30:56 crc kubenswrapper[5005]: E0225 13:30:56.693334 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:30:59 crc kubenswrapper[5005]: I0225 13:30:59.506896 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbznp"]
Feb 25 13:30:59 crc kubenswrapper[5005]: I0225 13:30:59.507799 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jbznp" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="registry-server" containerID="cri-o://fbae4fae3a43f0b79542583836a769a25c8368bf5eed0c5a4f1826b3bc3ce409" gracePeriod=2
Feb 25 13:31:00 crc kubenswrapper[5005]: I0225 13:31:00.382586 5005 generic.go:334] "Generic (PLEG): container finished" podID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerID="fbae4fae3a43f0b79542583836a769a25c8368bf5eed0c5a4f1826b3bc3ce409" exitCode=0
Feb 25 13:31:00 crc kubenswrapper[5005]: I0225 13:31:00.382669 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerDied","Data":"fbae4fae3a43f0b79542583836a769a25c8368bf5eed0c5a4f1826b3bc3ce409"}
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.031158 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.170082 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-catalog-content\") pod \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") "
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.170249 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-utilities\") pod \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") "
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.170350 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74vcr\" (UniqueName: \"kubernetes.io/projected/a888f55e-9170-4b20-8b73-d29f2fc6dc23-kube-api-access-74vcr\") pod \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\" (UID: \"a888f55e-9170-4b20-8b73-d29f2fc6dc23\") "
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.171402 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-utilities" (OuterVolumeSpecName: "utilities") pod "a888f55e-9170-4b20-8b73-d29f2fc6dc23" (UID: "a888f55e-9170-4b20-8b73-d29f2fc6dc23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.178867 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a888f55e-9170-4b20-8b73-d29f2fc6dc23-kube-api-access-74vcr" (OuterVolumeSpecName: "kube-api-access-74vcr") pod "a888f55e-9170-4b20-8b73-d29f2fc6dc23" (UID: "a888f55e-9170-4b20-8b73-d29f2fc6dc23"). InnerVolumeSpecName "kube-api-access-74vcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.221099 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a888f55e-9170-4b20-8b73-d29f2fc6dc23" (UID: "a888f55e-9170-4b20-8b73-d29f2fc6dc23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.273003 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.273050 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a888f55e-9170-4b20-8b73-d29f2fc6dc23-utilities\") on node \"crc\" DevicePath \"\""
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.273067 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74vcr\" (UniqueName: \"kubernetes.io/projected/a888f55e-9170-4b20-8b73-d29f2fc6dc23-kube-api-access-74vcr\") on node \"crc\" DevicePath \"\""
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.392271 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbznp" event={"ID":"a888f55e-9170-4b20-8b73-d29f2fc6dc23","Type":"ContainerDied","Data":"5b3f633363b15067e25106405c78698bdf72ca7a51dcafe431c9397d354956e9"}
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.392329 5005 scope.go:117] "RemoveContainer" containerID="fbae4fae3a43f0b79542583836a769a25c8368bf5eed0c5a4f1826b3bc3ce409"
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.392468 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbznp"
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.416234 5005 scope.go:117] "RemoveContainer" containerID="6666f9b75cca35c0ea807ef33caff036da3044f52c66e9d4f7bc25135b30ec7b"
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.427508 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbznp"]
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.435826 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jbznp"]
Feb 25 13:31:01 crc kubenswrapper[5005]: I0225 13:31:01.449351 5005 scope.go:117] "RemoveContainer" containerID="5f8ba93ef0ad39585fa1b7521e6ad2d426c7412fee834215b9ab3d934e7ab8a1"
Feb 25 13:31:02 crc kubenswrapper[5005]: I0225 13:31:02.697495 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" path="/var/lib/kubelet/pods/a888f55e-9170-4b20-8b73-d29f2fc6dc23/volumes"
Feb 25 13:31:11 crc kubenswrapper[5005]: I0225 13:31:11.685299 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:31:11 crc kubenswrapper[5005]: E0225 13:31:11.686335 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:31:26 crc kubenswrapper[5005]: I0225 13:31:26.692005 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:31:26 crc kubenswrapper[5005]: E0225 13:31:26.692853 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:31:40 crc kubenswrapper[5005]: I0225 13:31:40.685355 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:31:40 crc kubenswrapper[5005]: E0225 13:31:40.686092 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:31:54 crc kubenswrapper[5005]: I0225 13:31:54.685203 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:31:54 crc kubenswrapper[5005]: E0225 13:31:54.686136 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.141217 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533772-vwcmr"]
Feb 25 13:32:00 crc kubenswrapper[5005]: E0225 13:32:00.143053 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="registry-server"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.143148 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="registry-server"
Feb 25 13:32:00 crc kubenswrapper[5005]: E0225 13:32:00.143226 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="extract-content"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.143297 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="extract-content"
Feb 25 13:32:00 crc kubenswrapper[5005]: E0225 13:32:00.143416 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="extract-utilities"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.143504 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="extract-utilities"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.143840 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a888f55e-9170-4b20-8b73-d29f2fc6dc23" containerName="registry-server"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.144568 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.148863 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.149472 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.149717 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.151827 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533772-vwcmr"]
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.306939 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5cn\" (UniqueName: \"kubernetes.io/projected/10aa38b8-236d-421c-9b9c-a6b3a3c3c361-kube-api-access-ps5cn\") pod \"auto-csr-approver-29533772-vwcmr\" (UID: \"10aa38b8-236d-421c-9b9c-a6b3a3c3c361\") " pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.409148 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5cn\" (UniqueName: \"kubernetes.io/projected/10aa38b8-236d-421c-9b9c-a6b3a3c3c361-kube-api-access-ps5cn\") pod \"auto-csr-approver-29533772-vwcmr\" (UID: \"10aa38b8-236d-421c-9b9c-a6b3a3c3c361\") " pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.439995 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5cn\" (UniqueName: \"kubernetes.io/projected/10aa38b8-236d-421c-9b9c-a6b3a3c3c361-kube-api-access-ps5cn\") pod \"auto-csr-approver-29533772-vwcmr\" (UID: \"10aa38b8-236d-421c-9b9c-a6b3a3c3c361\") " pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.462326 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:00 crc kubenswrapper[5005]: I0225 13:32:00.982458 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533772-vwcmr"]
Feb 25 13:32:01 crc kubenswrapper[5005]: I0225 13:32:01.922330 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533772-vwcmr" event={"ID":"10aa38b8-236d-421c-9b9c-a6b3a3c3c361","Type":"ContainerStarted","Data":"ae8687588d23f96119972bf572b2a37c9db97f35548b315032073f9fd86a5ff8"}
Feb 25 13:32:02 crc kubenswrapper[5005]: I0225 13:32:02.932108 5005 generic.go:334] "Generic (PLEG): container finished" podID="10aa38b8-236d-421c-9b9c-a6b3a3c3c361" containerID="4c3f272a399f948d05e2202f5d96e7a8a24821a98b06435e69f830b4cc1b47ed" exitCode=0
Feb 25 13:32:02 crc kubenswrapper[5005]: I0225 13:32:02.932166 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533772-vwcmr" event={"ID":"10aa38b8-236d-421c-9b9c-a6b3a3c3c361","Type":"ContainerDied","Data":"4c3f272a399f948d05e2202f5d96e7a8a24821a98b06435e69f830b4cc1b47ed"}
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.241325 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.400951 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5cn\" (UniqueName: \"kubernetes.io/projected/10aa38b8-236d-421c-9b9c-a6b3a3c3c361-kube-api-access-ps5cn\") pod \"10aa38b8-236d-421c-9b9c-a6b3a3c3c361\" (UID: \"10aa38b8-236d-421c-9b9c-a6b3a3c3c361\") "
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.406667 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10aa38b8-236d-421c-9b9c-a6b3a3c3c361-kube-api-access-ps5cn" (OuterVolumeSpecName: "kube-api-access-ps5cn") pod "10aa38b8-236d-421c-9b9c-a6b3a3c3c361" (UID: "10aa38b8-236d-421c-9b9c-a6b3a3c3c361"). InnerVolumeSpecName "kube-api-access-ps5cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.502757 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5cn\" (UniqueName: \"kubernetes.io/projected/10aa38b8-236d-421c-9b9c-a6b3a3c3c361-kube-api-access-ps5cn\") on node \"crc\" DevicePath \"\""
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.949360 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533772-vwcmr" event={"ID":"10aa38b8-236d-421c-9b9c-a6b3a3c3c361","Type":"ContainerDied","Data":"ae8687588d23f96119972bf572b2a37c9db97f35548b315032073f9fd86a5ff8"}
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.949743 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8687588d23f96119972bf572b2a37c9db97f35548b315032073f9fd86a5ff8"
Feb 25 13:32:04 crc kubenswrapper[5005]: I0225 13:32:04.949447 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533772-vwcmr"
Feb 25 13:32:05 crc kubenswrapper[5005]: I0225 13:32:05.305408 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533766-85snf"]
Feb 25 13:32:05 crc kubenswrapper[5005]: I0225 13:32:05.316661 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533766-85snf"]
Feb 25 13:32:06 crc kubenswrapper[5005]: I0225 13:32:06.696812 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb708f49-97df-4dd7-a088-1d4240ba3831" path="/var/lib/kubelet/pods/fb708f49-97df-4dd7-a088-1d4240ba3831/volumes"
Feb 25 13:32:09 crc kubenswrapper[5005]: I0225 13:32:09.685596 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae"
Feb 25 13:32:10 crc kubenswrapper[5005]: I0225 13:32:10.001760 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"85c752404d1f958a38155c7cd981c0e67f818f1f94ca2a7107f2e153f56b5223"}
Feb 25 13:32:25 crc kubenswrapper[5005]: I0225 13:32:25.786954 5005 scope.go:117] "RemoveContainer" containerID="75fccea0c651dd631abc3a8e288c925ac0f4aa8701db388fce331e2497acd952"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.144079 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533774-j7zmg"]
Feb 25 13:34:00 crc kubenswrapper[5005]: E0225 13:34:00.144989 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa38b8-236d-421c-9b9c-a6b3a3c3c361" containerName="oc"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.145000 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa38b8-236d-421c-9b9c-a6b3a3c3c361" containerName="oc"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.145189 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="10aa38b8-236d-421c-9b9c-a6b3a3c3c361" containerName="oc"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.145768 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533774-j7zmg"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.148137 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.151116 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.151350 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.153145 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533774-j7zmg"]
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.299142 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzsq\" (UniqueName: \"kubernetes.io/projected/38c3de69-c8d2-457a-92be-5a509868e9ef-kube-api-access-pmzsq\") pod \"auto-csr-approver-29533774-j7zmg\" (UID: \"38c3de69-c8d2-457a-92be-5a509868e9ef\") " pod="openshift-infra/auto-csr-approver-29533774-j7zmg"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.401792 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmzsq\" (UniqueName: \"kubernetes.io/projected/38c3de69-c8d2-457a-92be-5a509868e9ef-kube-api-access-pmzsq\") pod \"auto-csr-approver-29533774-j7zmg\" (UID: \"38c3de69-c8d2-457a-92be-5a509868e9ef\") " pod="openshift-infra/auto-csr-approver-29533774-j7zmg"
Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.420949 5005 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"kube-api-access-pmzsq\" (UniqueName: \"kubernetes.io/projected/38c3de69-c8d2-457a-92be-5a509868e9ef-kube-api-access-pmzsq\") pod \"auto-csr-approver-29533774-j7zmg\" (UID: \"38c3de69-c8d2-457a-92be-5a509868e9ef\") " pod="openshift-infra/auto-csr-approver-29533774-j7zmg" Feb 25 13:34:00 crc kubenswrapper[5005]: I0225 13:34:00.508344 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" Feb 25 13:34:01 crc kubenswrapper[5005]: I0225 13:34:01.004571 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533774-j7zmg"] Feb 25 13:34:01 crc kubenswrapper[5005]: W0225 13:34:01.012562 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c3de69_c8d2_457a_92be_5a509868e9ef.slice/crio-1b1d80a6ce7cb3d9aceb7e03e72e4d9a14dff2944d533ed5db75193901e5a38f WatchSource:0}: Error finding container 1b1d80a6ce7cb3d9aceb7e03e72e4d9a14dff2944d533ed5db75193901e5a38f: Status 404 returned error can't find the container with id 1b1d80a6ce7cb3d9aceb7e03e72e4d9a14dff2944d533ed5db75193901e5a38f Feb 25 13:34:01 crc kubenswrapper[5005]: I0225 13:34:01.515291 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" event={"ID":"38c3de69-c8d2-457a-92be-5a509868e9ef","Type":"ContainerStarted","Data":"1b1d80a6ce7cb3d9aceb7e03e72e4d9a14dff2944d533ed5db75193901e5a38f"} Feb 25 13:34:02 crc kubenswrapper[5005]: I0225 13:34:02.525413 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" event={"ID":"38c3de69-c8d2-457a-92be-5a509868e9ef","Type":"ContainerStarted","Data":"23a98077f66963094d2f54caad6c6ab440517b0c8b900c00969160a638572de4"} Feb 25 13:34:02 crc kubenswrapper[5005]: I0225 13:34:02.543687 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29533774-j7zmg" podStartSLOduration=1.433254977 podStartE2EDuration="2.543665392s" podCreationTimestamp="2026-02-25 13:34:00 +0000 UTC" firstStartedPulling="2026-02-25 13:34:01.015625541 +0000 UTC m=+8155.056357868" lastFinishedPulling="2026-02-25 13:34:02.126035956 +0000 UTC m=+8156.166768283" observedRunningTime="2026-02-25 13:34:02.539943296 +0000 UTC m=+8156.580675623" watchObservedRunningTime="2026-02-25 13:34:02.543665392 +0000 UTC m=+8156.584397719" Feb 25 13:34:03 crc kubenswrapper[5005]: I0225 13:34:03.536317 5005 generic.go:334] "Generic (PLEG): container finished" podID="38c3de69-c8d2-457a-92be-5a509868e9ef" containerID="23a98077f66963094d2f54caad6c6ab440517b0c8b900c00969160a638572de4" exitCode=0 Feb 25 13:34:03 crc kubenswrapper[5005]: I0225 13:34:03.536435 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" event={"ID":"38c3de69-c8d2-457a-92be-5a509868e9ef","Type":"ContainerDied","Data":"23a98077f66963094d2f54caad6c6ab440517b0c8b900c00969160a638572de4"} Feb 25 13:34:04 crc kubenswrapper[5005]: I0225 13:34:04.952121 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.085745 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmzsq\" (UniqueName: \"kubernetes.io/projected/38c3de69-c8d2-457a-92be-5a509868e9ef-kube-api-access-pmzsq\") pod \"38c3de69-c8d2-457a-92be-5a509868e9ef\" (UID: \"38c3de69-c8d2-457a-92be-5a509868e9ef\") " Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.092625 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c3de69-c8d2-457a-92be-5a509868e9ef-kube-api-access-pmzsq" (OuterVolumeSpecName: "kube-api-access-pmzsq") pod "38c3de69-c8d2-457a-92be-5a509868e9ef" (UID: "38c3de69-c8d2-457a-92be-5a509868e9ef"). InnerVolumeSpecName "kube-api-access-pmzsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.188555 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmzsq\" (UniqueName: \"kubernetes.io/projected/38c3de69-c8d2-457a-92be-5a509868e9ef-kube-api-access-pmzsq\") on node \"crc\" DevicePath \"\"" Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.558159 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" event={"ID":"38c3de69-c8d2-457a-92be-5a509868e9ef","Type":"ContainerDied","Data":"1b1d80a6ce7cb3d9aceb7e03e72e4d9a14dff2944d533ed5db75193901e5a38f"} Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.558461 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b1d80a6ce7cb3d9aceb7e03e72e4d9a14dff2944d533ed5db75193901e5a38f" Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.558299 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533774-j7zmg" Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.609212 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533768-rcrlb"] Feb 25 13:34:05 crc kubenswrapper[5005]: I0225 13:34:05.620075 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533768-rcrlb"] Feb 25 13:34:06 crc kubenswrapper[5005]: I0225 13:34:06.701180 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c863f15-7316-4ba9-9f46-dfe5705563ac" path="/var/lib/kubelet/pods/5c863f15-7316-4ba9-9f46-dfe5705563ac/volumes" Feb 25 13:34:25 crc kubenswrapper[5005]: I0225 13:34:25.885407 5005 scope.go:117] "RemoveContainer" containerID="6c89447031bb15dd56537a8e4ca49e6e83c592e11189e7664368f8470dc16f9c" Feb 25 13:34:28 crc kubenswrapper[5005]: I0225 13:34:28.087337 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:34:28 crc kubenswrapper[5005]: I0225 13:34:28.088295 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:34:58 crc kubenswrapper[5005]: I0225 13:34:58.087053 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:34:58 crc kubenswrapper[5005]: 
I0225 13:34:58.087694 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.087922 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.088577 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.088625 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.089327 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85c752404d1f958a38155c7cd981c0e67f818f1f94ca2a7107f2e153f56b5223"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.089391 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerName="machine-config-daemon" containerID="cri-o://85c752404d1f958a38155c7cd981c0e67f818f1f94ca2a7107f2e153f56b5223" gracePeriod=600 Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.357184 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="85c752404d1f958a38155c7cd981c0e67f818f1f94ca2a7107f2e153f56b5223" exitCode=0 Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.357261 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"85c752404d1f958a38155c7cd981c0e67f818f1f94ca2a7107f2e153f56b5223"} Feb 25 13:35:28 crc kubenswrapper[5005]: I0225 13:35:28.357768 5005 scope.go:117] "RemoveContainer" containerID="bc75c2e9a867a5fd2fd831dc790b4d62730641594da052cfacfb613146a98aae" Feb 25 13:35:29 crc kubenswrapper[5005]: I0225 13:35:29.366240 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4"} Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.154139 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533776-grflq"] Feb 25 13:36:00 crc kubenswrapper[5005]: E0225 13:36:00.156867 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c3de69-c8d2-457a-92be-5a509868e9ef" containerName="oc" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.157046 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c3de69-c8d2-457a-92be-5a509868e9ef" containerName="oc" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.157552 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c3de69-c8d2-457a-92be-5a509868e9ef" containerName="oc" Feb 25 13:36:00 
crc kubenswrapper[5005]: I0225 13:36:00.158589 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.162476 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.162519 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.162499 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.168494 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533776-grflq"] Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.199449 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jht\" (UniqueName: \"kubernetes.io/projected/85527208-6e1a-4c3e-b0ef-59799382f98e-kube-api-access-z6jht\") pod \"auto-csr-approver-29533776-grflq\" (UID: \"85527208-6e1a-4c3e-b0ef-59799382f98e\") " pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.301256 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6jht\" (UniqueName: \"kubernetes.io/projected/85527208-6e1a-4c3e-b0ef-59799382f98e-kube-api-access-z6jht\") pod \"auto-csr-approver-29533776-grflq\" (UID: \"85527208-6e1a-4c3e-b0ef-59799382f98e\") " pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.324353 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6jht\" (UniqueName: \"kubernetes.io/projected/85527208-6e1a-4c3e-b0ef-59799382f98e-kube-api-access-z6jht\") 
pod \"auto-csr-approver-29533776-grflq\" (UID: \"85527208-6e1a-4c3e-b0ef-59799382f98e\") " pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.483932 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.979486 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533776-grflq"] Feb 25 13:36:00 crc kubenswrapper[5005]: I0225 13:36:00.983260 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:36:01 crc kubenswrapper[5005]: I0225 13:36:01.663802 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533776-grflq" event={"ID":"85527208-6e1a-4c3e-b0ef-59799382f98e","Type":"ContainerStarted","Data":"07d0c6a8742cb614defec526fb541f5899fab37e527fe8651cf8567e44bde88a"} Feb 25 13:36:03 crc kubenswrapper[5005]: I0225 13:36:03.681383 5005 generic.go:334] "Generic (PLEG): container finished" podID="85527208-6e1a-4c3e-b0ef-59799382f98e" containerID="5ff883db091143ab328e02ffcfd3ea13f3e1aa637519f813cf65d33a1f8dd8df" exitCode=0 Feb 25 13:36:03 crc kubenswrapper[5005]: I0225 13:36:03.681480 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533776-grflq" event={"ID":"85527208-6e1a-4c3e-b0ef-59799382f98e","Type":"ContainerDied","Data":"5ff883db091143ab328e02ffcfd3ea13f3e1aa637519f813cf65d33a1f8dd8df"} Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.083717 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.207410 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6jht\" (UniqueName: \"kubernetes.io/projected/85527208-6e1a-4c3e-b0ef-59799382f98e-kube-api-access-z6jht\") pod \"85527208-6e1a-4c3e-b0ef-59799382f98e\" (UID: \"85527208-6e1a-4c3e-b0ef-59799382f98e\") " Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.212523 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85527208-6e1a-4c3e-b0ef-59799382f98e-kube-api-access-z6jht" (OuterVolumeSpecName: "kube-api-access-z6jht") pod "85527208-6e1a-4c3e-b0ef-59799382f98e" (UID: "85527208-6e1a-4c3e-b0ef-59799382f98e"). InnerVolumeSpecName "kube-api-access-z6jht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.309609 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6jht\" (UniqueName: \"kubernetes.io/projected/85527208-6e1a-4c3e-b0ef-59799382f98e-kube-api-access-z6jht\") on node \"crc\" DevicePath \"\"" Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.702328 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533776-grflq" event={"ID":"85527208-6e1a-4c3e-b0ef-59799382f98e","Type":"ContainerDied","Data":"07d0c6a8742cb614defec526fb541f5899fab37e527fe8651cf8567e44bde88a"} Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.702391 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d0c6a8742cb614defec526fb541f5899fab37e527fe8651cf8567e44bde88a" Feb 25 13:36:05 crc kubenswrapper[5005]: I0225 13:36:05.702389 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533776-grflq" Feb 25 13:36:06 crc kubenswrapper[5005]: I0225 13:36:06.144993 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533770-q968s"] Feb 25 13:36:06 crc kubenswrapper[5005]: I0225 13:36:06.157227 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533770-q968s"] Feb 25 13:36:06 crc kubenswrapper[5005]: I0225 13:36:06.698569 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd697d6-ee91-402b-a985-3ea1756f70d7" path="/var/lib/kubelet/pods/bbd697d6-ee91-402b-a985-3ea1756f70d7/volumes" Feb 25 13:36:25 crc kubenswrapper[5005]: I0225 13:36:25.978938 5005 scope.go:117] "RemoveContainer" containerID="7f0e224f15b77ca322758f69c285648861e1beae953f29e02e2b24d2640568ca" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.735049 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6r6jd"] Feb 25 13:37:05 crc kubenswrapper[5005]: E0225 13:37:05.736364 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85527208-6e1a-4c3e-b0ef-59799382f98e" containerName="oc" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.736395 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="85527208-6e1a-4c3e-b0ef-59799382f98e" containerName="oc" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.736661 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="85527208-6e1a-4c3e-b0ef-59799382f98e" containerName="oc" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.738326 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.754002 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6r6jd"] Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.771190 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnj98\" (UniqueName: \"kubernetes.io/projected/368856bb-a44a-49bd-83a3-e7a5f768a183-kube-api-access-tnj98\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.771258 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-catalog-content\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.771306 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-utilities\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.873523 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnj98\" (UniqueName: \"kubernetes.io/projected/368856bb-a44a-49bd-83a3-e7a5f768a183-kube-api-access-tnj98\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.873604 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-catalog-content\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.873651 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-utilities\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.874135 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-catalog-content\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.876087 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-utilities\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:05 crc kubenswrapper[5005]: I0225 13:37:05.899568 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnj98\" (UniqueName: \"kubernetes.io/projected/368856bb-a44a-49bd-83a3-e7a5f768a183-kube-api-access-tnj98\") pod \"redhat-operators-6r6jd\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:06 crc kubenswrapper[5005]: I0225 13:37:06.060509 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:06 crc kubenswrapper[5005]: I0225 13:37:06.513724 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6r6jd"] Feb 25 13:37:07 crc kubenswrapper[5005]: I0225 13:37:07.390455 5005 generic.go:334] "Generic (PLEG): container finished" podID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerID="39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490" exitCode=0 Feb 25 13:37:07 crc kubenswrapper[5005]: I0225 13:37:07.390749 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerDied","Data":"39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490"} Feb 25 13:37:07 crc kubenswrapper[5005]: I0225 13:37:07.390860 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerStarted","Data":"fa925fa47185ae0a6bb5dfb9fef93cdbbd59674c4964aad022ccfb5807e5151f"} Feb 25 13:37:08 crc kubenswrapper[5005]: I0225 13:37:08.399541 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerStarted","Data":"675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd"} Feb 25 13:37:09 crc kubenswrapper[5005]: I0225 13:37:09.413168 5005 generic.go:334] "Generic (PLEG): container finished" podID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerID="675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd" exitCode=0 Feb 25 13:37:09 crc kubenswrapper[5005]: I0225 13:37:09.413233 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" 
event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerDied","Data":"675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd"} Feb 25 13:37:10 crc kubenswrapper[5005]: I0225 13:37:10.427221 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerStarted","Data":"48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275"} Feb 25 13:37:10 crc kubenswrapper[5005]: I0225 13:37:10.448755 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6r6jd" podStartSLOduration=2.9626639580000003 podStartE2EDuration="5.448735679s" podCreationTimestamp="2026-02-25 13:37:05 +0000 UTC" firstStartedPulling="2026-02-25 13:37:07.392989879 +0000 UTC m=+8341.433722206" lastFinishedPulling="2026-02-25 13:37:09.8790616 +0000 UTC m=+8343.919793927" observedRunningTime="2026-02-25 13:37:10.443821478 +0000 UTC m=+8344.484553805" watchObservedRunningTime="2026-02-25 13:37:10.448735679 +0000 UTC m=+8344.489468006" Feb 25 13:37:11 crc kubenswrapper[5005]: E0225 13:37:11.656148 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-conmon-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache]" Feb 25 13:37:16 crc kubenswrapper[5005]: I0225 13:37:16.060960 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:16 crc kubenswrapper[5005]: I0225 
13:37:16.061524 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:16 crc kubenswrapper[5005]: I0225 13:37:16.104737 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:16 crc kubenswrapper[5005]: I0225 13:37:16.515790 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:16 crc kubenswrapper[5005]: I0225 13:37:16.563311 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6r6jd"] Feb 25 13:37:18 crc kubenswrapper[5005]: I0225 13:37:18.490980 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6r6jd" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="registry-server" containerID="cri-o://48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275" gracePeriod=2 Feb 25 13:37:18 crc kubenswrapper[5005]: I0225 13:37:18.974276 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.136175 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-catalog-content\") pod \"368856bb-a44a-49bd-83a3-e7a5f768a183\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.136268 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnj98\" (UniqueName: \"kubernetes.io/projected/368856bb-a44a-49bd-83a3-e7a5f768a183-kube-api-access-tnj98\") pod \"368856bb-a44a-49bd-83a3-e7a5f768a183\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.136408 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-utilities\") pod \"368856bb-a44a-49bd-83a3-e7a5f768a183\" (UID: \"368856bb-a44a-49bd-83a3-e7a5f768a183\") " Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.137466 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-utilities" (OuterVolumeSpecName: "utilities") pod "368856bb-a44a-49bd-83a3-e7a5f768a183" (UID: "368856bb-a44a-49bd-83a3-e7a5f768a183"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.149888 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368856bb-a44a-49bd-83a3-e7a5f768a183-kube-api-access-tnj98" (OuterVolumeSpecName: "kube-api-access-tnj98") pod "368856bb-a44a-49bd-83a3-e7a5f768a183" (UID: "368856bb-a44a-49bd-83a3-e7a5f768a183"). InnerVolumeSpecName "kube-api-access-tnj98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.238946 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnj98\" (UniqueName: \"kubernetes.io/projected/368856bb-a44a-49bd-83a3-e7a5f768a183-kube-api-access-tnj98\") on node \"crc\" DevicePath \"\"" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.238997 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.499451 5005 generic.go:334] "Generic (PLEG): container finished" podID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerID="48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275" exitCode=0 Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.500584 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6r6jd" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.500609 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerDied","Data":"48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275"} Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.500840 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6r6jd" event={"ID":"368856bb-a44a-49bd-83a3-e7a5f768a183","Type":"ContainerDied","Data":"fa925fa47185ae0a6bb5dfb9fef93cdbbd59674c4964aad022ccfb5807e5151f"} Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.500865 5005 scope.go:117] "RemoveContainer" containerID="48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.521897 5005 scope.go:117] "RemoveContainer" 
containerID="675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.548094 5005 scope.go:117] "RemoveContainer" containerID="39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.598713 5005 scope.go:117] "RemoveContainer" containerID="48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275" Feb 25 13:37:19 crc kubenswrapper[5005]: E0225 13:37:19.599206 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275\": container with ID starting with 48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275 not found: ID does not exist" containerID="48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.599279 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275"} err="failed to get container status \"48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275\": rpc error: code = NotFound desc = could not find container \"48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275\": container with ID starting with 48bec78ccb666264f477468bf2da3f5005b48b69d126470842df7e3ab58c4275 not found: ID does not exist" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.599299 5005 scope.go:117] "RemoveContainer" containerID="675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd" Feb 25 13:37:19 crc kubenswrapper[5005]: E0225 13:37:19.599759 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd\": container with ID starting with 
675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd not found: ID does not exist" containerID="675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.599795 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd"} err="failed to get container status \"675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd\": rpc error: code = NotFound desc = could not find container \"675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd\": container with ID starting with 675822788f830f166d0af61ceb1e5c9bcddc11b9def7a6bf89aa025d6f57c7dd not found: ID does not exist" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.599821 5005 scope.go:117] "RemoveContainer" containerID="39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490" Feb 25 13:37:19 crc kubenswrapper[5005]: E0225 13:37:19.600158 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490\": container with ID starting with 39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490 not found: ID does not exist" containerID="39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.600182 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490"} err="failed to get container status \"39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490\": rpc error: code = NotFound desc = could not find container \"39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490\": container with ID starting with 39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490 not found: ID does not 
exist" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.626586 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "368856bb-a44a-49bd-83a3-e7a5f768a183" (UID: "368856bb-a44a-49bd-83a3-e7a5f768a183"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.649505 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368856bb-a44a-49bd-83a3-e7a5f768a183-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.836355 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6r6jd"] Feb 25 13:37:19 crc kubenswrapper[5005]: I0225 13:37:19.845436 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6r6jd"] Feb 25 13:37:20 crc kubenswrapper[5005]: I0225 13:37:20.694514 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" path="/var/lib/kubelet/pods/368856bb-a44a-49bd-83a3-e7a5f768a183/volumes" Feb 25 13:37:21 crc kubenswrapper[5005]: E0225 13:37:21.876641 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-conmon-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache]" Feb 25 13:37:28 crc 
kubenswrapper[5005]: I0225 13:37:28.087301 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:37:28 crc kubenswrapper[5005]: I0225 13:37:28.087902 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:37:32 crc kubenswrapper[5005]: E0225 13:37:32.131162 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-conmon-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache]" Feb 25 13:37:42 crc kubenswrapper[5005]: E0225 13:37:42.413512 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-conmon-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": 
RecentStats: unable to find data in memory cache]" Feb 25 13:37:52 crc kubenswrapper[5005]: E0225 13:37:52.640437 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-conmon-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache]" Feb 25 13:37:56 crc kubenswrapper[5005]: I0225 13:37:56.997659 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8hvjs"] Feb 25 13:37:56 crc kubenswrapper[5005]: E0225 13:37:56.998790 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="extract-utilities" Feb 25 13:37:56 crc kubenswrapper[5005]: I0225 13:37:56.998825 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="extract-utilities" Feb 25 13:37:56 crc kubenswrapper[5005]: E0225 13:37:56.998836 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="registry-server" Feb 25 13:37:56 crc kubenswrapper[5005]: I0225 13:37:56.998844 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="registry-server" Feb 25 13:37:56 crc kubenswrapper[5005]: E0225 13:37:56.998870 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="extract-content" Feb 25 13:37:56 crc kubenswrapper[5005]: I0225 13:37:56.998880 5005 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="extract-content" Feb 25 13:37:56 crc kubenswrapper[5005]: I0225 13:37:56.999138 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="368856bb-a44a-49bd-83a3-e7a5f768a183" containerName="registry-server" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.000903 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.008664 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hvjs"] Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.137723 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbddj\" (UniqueName: \"kubernetes.io/projected/da2db021-7bf0-42ce-81e3-1e260d30f908-kube-api-access-zbddj\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.137791 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-utilities\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.138216 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-catalog-content\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.239995 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-utilities\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.240134 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-catalog-content\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.240204 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbddj\" (UniqueName: \"kubernetes.io/projected/da2db021-7bf0-42ce-81e3-1e260d30f908-kube-api-access-zbddj\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.240759 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-utilities\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.240828 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-catalog-content\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.259647 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zbddj\" (UniqueName: \"kubernetes.io/projected/da2db021-7bf0-42ce-81e3-1e260d30f908-kube-api-access-zbddj\") pod \"redhat-marketplace-8hvjs\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.329860 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.776082 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hvjs"] Feb 25 13:37:57 crc kubenswrapper[5005]: I0225 13:37:57.868528 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hvjs" event={"ID":"da2db021-7bf0-42ce-81e3-1e260d30f908","Type":"ContainerStarted","Data":"d6cd4963744613337a5772a721fa7b8125ef751bb51ed65c1ea0a86998d0f19a"} Feb 25 13:37:58 crc kubenswrapper[5005]: I0225 13:37:58.087891 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:37:58 crc kubenswrapper[5005]: I0225 13:37:58.088318 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:37:58 crc kubenswrapper[5005]: I0225 13:37:58.881690 5005 generic.go:334] "Generic (PLEG): container finished" podID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerID="4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d" exitCode=0 Feb 25 13:37:58 crc kubenswrapper[5005]: I0225 
13:37:58.881741 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hvjs" event={"ID":"da2db021-7bf0-42ce-81e3-1e260d30f908","Type":"ContainerDied","Data":"4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d"} Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.145566 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533778-l9k9s"] Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.147110 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.149431 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.149532 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.150350 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.162184 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533778-l9k9s"] Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.311090 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdsm\" (UniqueName: \"kubernetes.io/projected/02a997be-9c86-4e1a-9684-709bacc2ba6e-kube-api-access-ljdsm\") pod \"auto-csr-approver-29533778-l9k9s\" (UID: \"02a997be-9c86-4e1a-9684-709bacc2ba6e\") " pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.412953 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdsm\" (UniqueName: 
\"kubernetes.io/projected/02a997be-9c86-4e1a-9684-709bacc2ba6e-kube-api-access-ljdsm\") pod \"auto-csr-approver-29533778-l9k9s\" (UID: \"02a997be-9c86-4e1a-9684-709bacc2ba6e\") " pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.431312 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdsm\" (UniqueName: \"kubernetes.io/projected/02a997be-9c86-4e1a-9684-709bacc2ba6e-kube-api-access-ljdsm\") pod \"auto-csr-approver-29533778-l9k9s\" (UID: \"02a997be-9c86-4e1a-9684-709bacc2ba6e\") " pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.468394 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.906829 5005 generic.go:334] "Generic (PLEG): container finished" podID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerID="b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c" exitCode=0 Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.906955 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hvjs" event={"ID":"da2db021-7bf0-42ce-81e3-1e260d30f908","Type":"ContainerDied","Data":"b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c"} Feb 25 13:38:00 crc kubenswrapper[5005]: I0225 13:38:00.966332 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533778-l9k9s"] Feb 25 13:38:01 crc kubenswrapper[5005]: I0225 13:38:01.916411 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" event={"ID":"02a997be-9c86-4e1a-9684-709bacc2ba6e","Type":"ContainerStarted","Data":"a4628093f09dcff85b4c824a24fc039dcb4113f29cd5189e094ea8b664108865"} Feb 25 13:38:01 crc kubenswrapper[5005]: I0225 13:38:01.919020 5005 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hvjs" event={"ID":"da2db021-7bf0-42ce-81e3-1e260d30f908","Type":"ContainerStarted","Data":"cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0"} Feb 25 13:38:01 crc kubenswrapper[5005]: I0225 13:38:01.942866 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8hvjs" podStartSLOduration=3.4463053 podStartE2EDuration="5.942848084s" podCreationTimestamp="2026-02-25 13:37:56 +0000 UTC" firstStartedPulling="2026-02-25 13:37:58.883867274 +0000 UTC m=+8392.924599601" lastFinishedPulling="2026-02-25 13:38:01.380410058 +0000 UTC m=+8395.421142385" observedRunningTime="2026-02-25 13:38:01.933691191 +0000 UTC m=+8395.974423518" watchObservedRunningTime="2026-02-25 13:38:01.942848084 +0000 UTC m=+8395.983580411" Feb 25 13:38:02 crc kubenswrapper[5005]: E0225 13:38:02.902768 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a997be_9c86_4e1a_9684_709bacc2ba6e.slice/crio-conmon-b543ca39d3d58019f6313526a32b02e2e67a3ae3b2d11669c29390b32e04f3f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368856bb_a44a_49bd_83a3_e7a5f768a183.slice/crio-conmon-39e44cc9835c2afd127f779748ccc1e209d7eef8a1ae05c1ada338422573e490.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a997be_9c86_4e1a_9684_709bacc2ba6e.slice/crio-b543ca39d3d58019f6313526a32b02e2e67a3ae3b2d11669c29390b32e04f3f8.scope\": RecentStats: unable to find data 
in memory cache]" Feb 25 13:38:02 crc kubenswrapper[5005]: I0225 13:38:02.949079 5005 generic.go:334] "Generic (PLEG): container finished" podID="02a997be-9c86-4e1a-9684-709bacc2ba6e" containerID="b543ca39d3d58019f6313526a32b02e2e67a3ae3b2d11669c29390b32e04f3f8" exitCode=0 Feb 25 13:38:02 crc kubenswrapper[5005]: I0225 13:38:02.950134 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" event={"ID":"02a997be-9c86-4e1a-9684-709bacc2ba6e","Type":"ContainerDied","Data":"b543ca39d3d58019f6313526a32b02e2e67a3ae3b2d11669c29390b32e04f3f8"} Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.304081 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.494172 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljdsm\" (UniqueName: \"kubernetes.io/projected/02a997be-9c86-4e1a-9684-709bacc2ba6e-kube-api-access-ljdsm\") pod \"02a997be-9c86-4e1a-9684-709bacc2ba6e\" (UID: \"02a997be-9c86-4e1a-9684-709bacc2ba6e\") " Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.501652 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a997be-9c86-4e1a-9684-709bacc2ba6e-kube-api-access-ljdsm" (OuterVolumeSpecName: "kube-api-access-ljdsm") pod "02a997be-9c86-4e1a-9684-709bacc2ba6e" (UID: "02a997be-9c86-4e1a-9684-709bacc2ba6e"). InnerVolumeSpecName "kube-api-access-ljdsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.596442 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljdsm\" (UniqueName: \"kubernetes.io/projected/02a997be-9c86-4e1a-9684-709bacc2ba6e-kube-api-access-ljdsm\") on node \"crc\" DevicePath \"\"" Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.968747 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" event={"ID":"02a997be-9c86-4e1a-9684-709bacc2ba6e","Type":"ContainerDied","Data":"a4628093f09dcff85b4c824a24fc039dcb4113f29cd5189e094ea8b664108865"} Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.968785 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4628093f09dcff85b4c824a24fc039dcb4113f29cd5189e094ea8b664108865" Feb 25 13:38:04 crc kubenswrapper[5005]: I0225 13:38:04.968815 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533778-l9k9s" Feb 25 13:38:05 crc kubenswrapper[5005]: I0225 13:38:05.376816 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533772-vwcmr"] Feb 25 13:38:05 crc kubenswrapper[5005]: I0225 13:38:05.386356 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533772-vwcmr"] Feb 25 13:38:06 crc kubenswrapper[5005]: I0225 13:38:06.700520 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10aa38b8-236d-421c-9b9c-a6b3a3c3c361" path="/var/lib/kubelet/pods/10aa38b8-236d-421c-9b9c-a6b3a3c3c361/volumes" Feb 25 13:38:07 crc kubenswrapper[5005]: I0225 13:38:07.331106 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:38:07 crc kubenswrapper[5005]: I0225 13:38:07.331476 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:38:07 crc kubenswrapper[5005]: I0225 13:38:07.393292 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:38:08 crc kubenswrapper[5005]: I0225 13:38:08.042587 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:38:08 crc kubenswrapper[5005]: I0225 13:38:08.101608 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hvjs"] Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.013330 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8hvjs" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="registry-server" containerID="cri-o://cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0" gracePeriod=2 Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.470119 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.617346 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbddj\" (UniqueName: \"kubernetes.io/projected/da2db021-7bf0-42ce-81e3-1e260d30f908-kube-api-access-zbddj\") pod \"da2db021-7bf0-42ce-81e3-1e260d30f908\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.617695 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-utilities\") pod \"da2db021-7bf0-42ce-81e3-1e260d30f908\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.617830 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-catalog-content\") pod \"da2db021-7bf0-42ce-81e3-1e260d30f908\" (UID: \"da2db021-7bf0-42ce-81e3-1e260d30f908\") " Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.618775 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-utilities" (OuterVolumeSpecName: "utilities") pod "da2db021-7bf0-42ce-81e3-1e260d30f908" (UID: "da2db021-7bf0-42ce-81e3-1e260d30f908"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.626173 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2db021-7bf0-42ce-81e3-1e260d30f908-kube-api-access-zbddj" (OuterVolumeSpecName: "kube-api-access-zbddj") pod "da2db021-7bf0-42ce-81e3-1e260d30f908" (UID: "da2db021-7bf0-42ce-81e3-1e260d30f908"). InnerVolumeSpecName "kube-api-access-zbddj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.640798 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2db021-7bf0-42ce-81e3-1e260d30f908" (UID: "da2db021-7bf0-42ce-81e3-1e260d30f908"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.720386 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.720418 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2db021-7bf0-42ce-81e3-1e260d30f908-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:38:10 crc kubenswrapper[5005]: I0225 13:38:10.720434 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbddj\" (UniqueName: \"kubernetes.io/projected/da2db021-7bf0-42ce-81e3-1e260d30f908-kube-api-access-zbddj\") on node \"crc\" DevicePath \"\"" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.030573 5005 generic.go:334] "Generic (PLEG): container finished" podID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerID="cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0" exitCode=0 Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.030680 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hvjs" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.030724 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hvjs" event={"ID":"da2db021-7bf0-42ce-81e3-1e260d30f908","Type":"ContainerDied","Data":"cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0"} Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.031676 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hvjs" event={"ID":"da2db021-7bf0-42ce-81e3-1e260d30f908","Type":"ContainerDied","Data":"d6cd4963744613337a5772a721fa7b8125ef751bb51ed65c1ea0a86998d0f19a"} Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.031746 5005 scope.go:117] "RemoveContainer" containerID="cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.071099 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hvjs"] Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.076627 5005 scope.go:117] "RemoveContainer" containerID="b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.081591 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hvjs"] Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.098590 5005 scope.go:117] "RemoveContainer" containerID="4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.151273 5005 scope.go:117] "RemoveContainer" containerID="cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0" Feb 25 13:38:11 crc kubenswrapper[5005]: E0225 13:38:11.151913 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0\": container with ID starting with cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0 not found: ID does not exist" containerID="cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.151950 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0"} err="failed to get container status \"cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0\": rpc error: code = NotFound desc = could not find container \"cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0\": container with ID starting with cd8fc35abeab4f216a0a32fdd8edd0093496a0a5cbc81c6d92a254951c2910e0 not found: ID does not exist" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.151971 5005 scope.go:117] "RemoveContainer" containerID="b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c" Feb 25 13:38:11 crc kubenswrapper[5005]: E0225 13:38:11.152704 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c\": container with ID starting with b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c not found: ID does not exist" containerID="b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.152799 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c"} err="failed to get container status \"b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c\": rpc error: code = NotFound desc = could not find container \"b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c\": container with ID 
starting with b03c3d7db3acbf440850a6e63ed4400c10cd308139b0dbdafeead89ca40aab2c not found: ID does not exist" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.152860 5005 scope.go:117] "RemoveContainer" containerID="4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d" Feb 25 13:38:11 crc kubenswrapper[5005]: E0225 13:38:11.153413 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d\": container with ID starting with 4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d not found: ID does not exist" containerID="4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d" Feb 25 13:38:11 crc kubenswrapper[5005]: I0225 13:38:11.153442 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d"} err="failed to get container status \"4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d\": rpc error: code = NotFound desc = could not find container \"4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d\": container with ID starting with 4f4049eb5de8bf70ed3ee6aba3e575132a27557949acd2c7245fd12099688e2d not found: ID does not exist" Feb 25 13:38:12 crc kubenswrapper[5005]: I0225 13:38:12.697190 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" path="/var/lib/kubelet/pods/da2db021-7bf0-42ce-81e3-1e260d30f908/volumes" Feb 25 13:38:26 crc kubenswrapper[5005]: I0225 13:38:26.113753 5005 scope.go:117] "RemoveContainer" containerID="4c3f272a399f948d05e2202f5d96e7a8a24821a98b06435e69f830b4cc1b47ed" Feb 25 13:38:28 crc kubenswrapper[5005]: I0225 13:38:28.087188 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:38:28 crc kubenswrapper[5005]: I0225 13:38:28.087676 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:38:28 crc kubenswrapper[5005]: I0225 13:38:28.087715 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:38:28 crc kubenswrapper[5005]: I0225 13:38:28.088180 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:38:28 crc kubenswrapper[5005]: I0225 13:38:28.088233 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" gracePeriod=600 Feb 25 13:38:28 crc kubenswrapper[5005]: E0225 13:38:28.213910 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:38:29 crc kubenswrapper[5005]: I0225 13:38:29.203777 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" exitCode=0 Feb 25 13:38:29 crc kubenswrapper[5005]: I0225 13:38:29.203845 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4"} Feb 25 13:38:29 crc kubenswrapper[5005]: I0225 13:38:29.203887 5005 scope.go:117] "RemoveContainer" containerID="85c752404d1f958a38155c7cd981c0e67f818f1f94ca2a7107f2e153f56b5223" Feb 25 13:38:29 crc kubenswrapper[5005]: I0225 13:38:29.204513 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:38:29 crc kubenswrapper[5005]: E0225 13:38:29.204911 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:38:43 crc kubenswrapper[5005]: I0225 13:38:43.685717 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:38:43 crc kubenswrapper[5005]: E0225 13:38:43.686621 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:38:58 crc kubenswrapper[5005]: I0225 13:38:58.686094 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:38:58 crc kubenswrapper[5005]: E0225 13:38:58.686865 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:39:12 crc kubenswrapper[5005]: I0225 13:39:12.874046 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:39:12 crc kubenswrapper[5005]: E0225 13:39:12.874896 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:39:25 crc kubenswrapper[5005]: I0225 13:39:25.685780 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:39:25 crc kubenswrapper[5005]: E0225 13:39:25.686502 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:39:40 crc kubenswrapper[5005]: I0225 13:39:40.685261 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:39:40 crc kubenswrapper[5005]: E0225 13:39:40.686030 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:39:54 crc kubenswrapper[5005]: I0225 13:39:54.685481 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:39:54 crc kubenswrapper[5005]: E0225 13:39:54.686222 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.174631 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533780-7kvg2"] Feb 25 13:40:00 crc kubenswrapper[5005]: E0225 13:40:00.175828 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="extract-content" Feb 25 
13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.175848 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="extract-content" Feb 25 13:40:00 crc kubenswrapper[5005]: E0225 13:40:00.175869 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="registry-server" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.175878 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="registry-server" Feb 25 13:40:00 crc kubenswrapper[5005]: E0225 13:40:00.175890 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a997be-9c86-4e1a-9684-709bacc2ba6e" containerName="oc" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.175899 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a997be-9c86-4e1a-9684-709bacc2ba6e" containerName="oc" Feb 25 13:40:00 crc kubenswrapper[5005]: E0225 13:40:00.175931 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="extract-utilities" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.175941 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="extract-utilities" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.176167 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2db021-7bf0-42ce-81e3-1e260d30f908" containerName="registry-server" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.176188 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a997be-9c86-4e1a-9684-709bacc2ba6e" containerName="oc" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.177199 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.179975 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.180220 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.181271 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.189743 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533780-7kvg2"] Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.295344 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpvj\" (UniqueName: \"kubernetes.io/projected/6cfd3fa8-8b51-4073-816b-c47b895cd27e-kube-api-access-6gpvj\") pod \"auto-csr-approver-29533780-7kvg2\" (UID: \"6cfd3fa8-8b51-4073-816b-c47b895cd27e\") " pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.397697 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gpvj\" (UniqueName: \"kubernetes.io/projected/6cfd3fa8-8b51-4073-816b-c47b895cd27e-kube-api-access-6gpvj\") pod \"auto-csr-approver-29533780-7kvg2\" (UID: \"6cfd3fa8-8b51-4073-816b-c47b895cd27e\") " pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.418315 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gpvj\" (UniqueName: \"kubernetes.io/projected/6cfd3fa8-8b51-4073-816b-c47b895cd27e-kube-api-access-6gpvj\") pod \"auto-csr-approver-29533780-7kvg2\" (UID: \"6cfd3fa8-8b51-4073-816b-c47b895cd27e\") " 
pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.499062 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:00 crc kubenswrapper[5005]: I0225 13:40:00.828730 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533780-7kvg2"] Feb 25 13:40:01 crc kubenswrapper[5005]: I0225 13:40:01.036060 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" event={"ID":"6cfd3fa8-8b51-4073-816b-c47b895cd27e","Type":"ContainerStarted","Data":"54ea5dec5e20b349cbd1bd132e2e3b18f5ff0667ad511e254f5a5968da8a3290"} Feb 25 13:40:03 crc kubenswrapper[5005]: I0225 13:40:03.055353 5005 generic.go:334] "Generic (PLEG): container finished" podID="6cfd3fa8-8b51-4073-816b-c47b895cd27e" containerID="06bd6c37369cc629d479b39977b62ce46c6f6769d71fef74d78eb2d5728e2655" exitCode=0 Feb 25 13:40:03 crc kubenswrapper[5005]: I0225 13:40:03.055436 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" event={"ID":"6cfd3fa8-8b51-4073-816b-c47b895cd27e","Type":"ContainerDied","Data":"06bd6c37369cc629d479b39977b62ce46c6f6769d71fef74d78eb2d5728e2655"} Feb 25 13:40:04 crc kubenswrapper[5005]: I0225 13:40:04.410620 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:04 crc kubenswrapper[5005]: I0225 13:40:04.512508 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gpvj\" (UniqueName: \"kubernetes.io/projected/6cfd3fa8-8b51-4073-816b-c47b895cd27e-kube-api-access-6gpvj\") pod \"6cfd3fa8-8b51-4073-816b-c47b895cd27e\" (UID: \"6cfd3fa8-8b51-4073-816b-c47b895cd27e\") " Feb 25 13:40:04 crc kubenswrapper[5005]: I0225 13:40:04.518445 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfd3fa8-8b51-4073-816b-c47b895cd27e-kube-api-access-6gpvj" (OuterVolumeSpecName: "kube-api-access-6gpvj") pod "6cfd3fa8-8b51-4073-816b-c47b895cd27e" (UID: "6cfd3fa8-8b51-4073-816b-c47b895cd27e"). InnerVolumeSpecName "kube-api-access-6gpvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:40:04 crc kubenswrapper[5005]: I0225 13:40:04.616210 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gpvj\" (UniqueName: \"kubernetes.io/projected/6cfd3fa8-8b51-4073-816b-c47b895cd27e-kube-api-access-6gpvj\") on node \"crc\" DevicePath \"\"" Feb 25 13:40:05 crc kubenswrapper[5005]: I0225 13:40:05.075113 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" event={"ID":"6cfd3fa8-8b51-4073-816b-c47b895cd27e","Type":"ContainerDied","Data":"54ea5dec5e20b349cbd1bd132e2e3b18f5ff0667ad511e254f5a5968da8a3290"} Feb 25 13:40:05 crc kubenswrapper[5005]: I0225 13:40:05.075166 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ea5dec5e20b349cbd1bd132e2e3b18f5ff0667ad511e254f5a5968da8a3290" Feb 25 13:40:05 crc kubenswrapper[5005]: I0225 13:40:05.075185 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533780-7kvg2" Feb 25 13:40:05 crc kubenswrapper[5005]: I0225 13:40:05.489291 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533774-j7zmg"] Feb 25 13:40:05 crc kubenswrapper[5005]: I0225 13:40:05.497145 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533774-j7zmg"] Feb 25 13:40:06 crc kubenswrapper[5005]: I0225 13:40:06.701924 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c3de69-c8d2-457a-92be-5a509868e9ef" path="/var/lib/kubelet/pods/38c3de69-c8d2-457a-92be-5a509868e9ef/volumes" Feb 25 13:40:07 crc kubenswrapper[5005]: I0225 13:40:07.685591 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:40:07 crc kubenswrapper[5005]: E0225 13:40:07.686241 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:40:22 crc kubenswrapper[5005]: I0225 13:40:22.685359 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:40:22 crc kubenswrapper[5005]: E0225 13:40:22.686010 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:40:26 crc kubenswrapper[5005]: I0225 13:40:26.243058 5005 scope.go:117] "RemoveContainer" containerID="23a98077f66963094d2f54caad6c6ab440517b0c8b900c00969160a638572de4" Feb 25 13:40:33 crc kubenswrapper[5005]: I0225 13:40:33.685625 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:40:33 crc kubenswrapper[5005]: E0225 13:40:33.686301 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:40:44 crc kubenswrapper[5005]: I0225 13:40:44.685408 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:40:44 crc kubenswrapper[5005]: E0225 13:40:44.686208 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:40:56 crc kubenswrapper[5005]: I0225 13:40:56.691611 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:40:56 crc kubenswrapper[5005]: E0225 13:40:56.692435 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:41:09 crc kubenswrapper[5005]: I0225 13:41:09.685116 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:41:09 crc kubenswrapper[5005]: E0225 13:41:09.685961 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:41:24 crc kubenswrapper[5005]: I0225 13:41:24.686162 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:41:24 crc kubenswrapper[5005]: E0225 13:41:24.686911 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:41:35 crc kubenswrapper[5005]: I0225 13:41:35.685277 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:41:35 crc kubenswrapper[5005]: E0225 13:41:35.686307 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:41:48 crc kubenswrapper[5005]: I0225 13:41:48.686562 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:41:48 crc kubenswrapper[5005]: E0225 13:41:48.687435 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.147285 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533782-bhmmm"] Feb 25 13:42:00 crc kubenswrapper[5005]: E0225 13:42:00.148275 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfd3fa8-8b51-4073-816b-c47b895cd27e" containerName="oc" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.148288 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfd3fa8-8b51-4073-816b-c47b895cd27e" containerName="oc" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.148481 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfd3fa8-8b51-4073-816b-c47b895cd27e" containerName="oc" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.149130 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.150725 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.152646 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.152697 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.162303 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533782-bhmmm"] Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.170178 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjb5l\" (UniqueName: \"kubernetes.io/projected/d1c83a8f-b088-4b22-bd19-9fadd542773a-kube-api-access-qjb5l\") pod \"auto-csr-approver-29533782-bhmmm\" (UID: \"d1c83a8f-b088-4b22-bd19-9fadd542773a\") " pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.273447 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjb5l\" (UniqueName: \"kubernetes.io/projected/d1c83a8f-b088-4b22-bd19-9fadd542773a-kube-api-access-qjb5l\") pod \"auto-csr-approver-29533782-bhmmm\" (UID: \"d1c83a8f-b088-4b22-bd19-9fadd542773a\") " pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.291077 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjb5l\" (UniqueName: \"kubernetes.io/projected/d1c83a8f-b088-4b22-bd19-9fadd542773a-kube-api-access-qjb5l\") pod \"auto-csr-approver-29533782-bhmmm\" (UID: \"d1c83a8f-b088-4b22-bd19-9fadd542773a\") " 
pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.474741 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.950414 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533782-bhmmm"] Feb 25 13:42:00 crc kubenswrapper[5005]: I0225 13:42:00.952771 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:42:01 crc kubenswrapper[5005]: I0225 13:42:01.207931 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" event={"ID":"d1c83a8f-b088-4b22-bd19-9fadd542773a","Type":"ContainerStarted","Data":"6f9331a0c66035261a1bf6df06826df53d570560891e13e36bc4ba4947b5034d"} Feb 25 13:42:01 crc kubenswrapper[5005]: I0225 13:42:01.685601 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:42:01 crc kubenswrapper[5005]: E0225 13:42:01.686116 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:42:02 crc kubenswrapper[5005]: I0225 13:42:02.218024 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" event={"ID":"d1c83a8f-b088-4b22-bd19-9fadd542773a","Type":"ContainerStarted","Data":"857d677ade39ee9a0495cfb2bc5df193d449691f7afe91aebcf356bf502a807e"} Feb 25 13:42:02 crc kubenswrapper[5005]: I0225 13:42:02.244431 5005 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" podStartSLOduration=1.329254894 podStartE2EDuration="2.244409991s" podCreationTimestamp="2026-02-25 13:42:00 +0000 UTC" firstStartedPulling="2026-02-25 13:42:00.95249927 +0000 UTC m=+8634.993231597" lastFinishedPulling="2026-02-25 13:42:01.867654377 +0000 UTC m=+8635.908386694" observedRunningTime="2026-02-25 13:42:02.236507176 +0000 UTC m=+8636.277239523" watchObservedRunningTime="2026-02-25 13:42:02.244409991 +0000 UTC m=+8636.285142318" Feb 25 13:42:03 crc kubenswrapper[5005]: I0225 13:42:03.244142 5005 generic.go:334] "Generic (PLEG): container finished" podID="d1c83a8f-b088-4b22-bd19-9fadd542773a" containerID="857d677ade39ee9a0495cfb2bc5df193d449691f7afe91aebcf356bf502a807e" exitCode=0 Feb 25 13:42:03 crc kubenswrapper[5005]: I0225 13:42:03.244201 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" event={"ID":"d1c83a8f-b088-4b22-bd19-9fadd542773a","Type":"ContainerDied","Data":"857d677ade39ee9a0495cfb2bc5df193d449691f7afe91aebcf356bf502a807e"} Feb 25 13:42:04 crc kubenswrapper[5005]: I0225 13:42:04.621248 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:04 crc kubenswrapper[5005]: I0225 13:42:04.773873 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjb5l\" (UniqueName: \"kubernetes.io/projected/d1c83a8f-b088-4b22-bd19-9fadd542773a-kube-api-access-qjb5l\") pod \"d1c83a8f-b088-4b22-bd19-9fadd542773a\" (UID: \"d1c83a8f-b088-4b22-bd19-9fadd542773a\") " Feb 25 13:42:04 crc kubenswrapper[5005]: I0225 13:42:04.780215 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c83a8f-b088-4b22-bd19-9fadd542773a-kube-api-access-qjb5l" (OuterVolumeSpecName: "kube-api-access-qjb5l") pod "d1c83a8f-b088-4b22-bd19-9fadd542773a" (UID: "d1c83a8f-b088-4b22-bd19-9fadd542773a"). InnerVolumeSpecName "kube-api-access-qjb5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:42:04 crc kubenswrapper[5005]: I0225 13:42:04.876162 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjb5l\" (UniqueName: \"kubernetes.io/projected/d1c83a8f-b088-4b22-bd19-9fadd542773a-kube-api-access-qjb5l\") on node \"crc\" DevicePath \"\"" Feb 25 13:42:05 crc kubenswrapper[5005]: I0225 13:42:05.293404 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" event={"ID":"d1c83a8f-b088-4b22-bd19-9fadd542773a","Type":"ContainerDied","Data":"6f9331a0c66035261a1bf6df06826df53d570560891e13e36bc4ba4947b5034d"} Feb 25 13:42:05 crc kubenswrapper[5005]: I0225 13:42:05.293476 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9331a0c66035261a1bf6df06826df53d570560891e13e36bc4ba4947b5034d" Feb 25 13:42:05 crc kubenswrapper[5005]: I0225 13:42:05.293622 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533782-bhmmm" Feb 25 13:42:05 crc kubenswrapper[5005]: I0225 13:42:05.323750 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533776-grflq"] Feb 25 13:42:05 crc kubenswrapper[5005]: I0225 13:42:05.333884 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533776-grflq"] Feb 25 13:42:06 crc kubenswrapper[5005]: I0225 13:42:06.703495 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85527208-6e1a-4c3e-b0ef-59799382f98e" path="/var/lib/kubelet/pods/85527208-6e1a-4c3e-b0ef-59799382f98e/volumes" Feb 25 13:42:15 crc kubenswrapper[5005]: I0225 13:42:15.686318 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:42:15 crc kubenswrapper[5005]: E0225 13:42:15.687073 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:42:26 crc kubenswrapper[5005]: I0225 13:42:26.336071 5005 scope.go:117] "RemoveContainer" containerID="5ff883db091143ab328e02ffcfd3ea13f3e1aa637519f813cf65d33a1f8dd8df" Feb 25 13:42:30 crc kubenswrapper[5005]: I0225 13:42:30.685890 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:42:30 crc kubenswrapper[5005]: E0225 13:42:30.686891 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:42:45 crc kubenswrapper[5005]: I0225 13:42:45.686014 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:42:45 crc kubenswrapper[5005]: E0225 13:42:45.686782 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:43:00 crc kubenswrapper[5005]: I0225 13:43:00.686857 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:43:00 crc kubenswrapper[5005]: E0225 13:43:00.688065 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:43:12 crc kubenswrapper[5005]: I0225 13:43:12.686247 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:43:12 crc kubenswrapper[5005]: E0225 13:43:12.687075 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:43:24 crc kubenswrapper[5005]: I0225 13:43:24.685949 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:43:24 crc kubenswrapper[5005]: E0225 13:43:24.686909 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:43:38 crc kubenswrapper[5005]: I0225 13:43:38.686302 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:43:39 crc kubenswrapper[5005]: I0225 13:43:39.105886 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"9afbe60c6503a9378a18734a1be182e383020005a8fd3909f742fdfb6e4176b6"} Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.167605 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533784-mlsqv"] Feb 25 13:44:00 crc kubenswrapper[5005]: E0225 13:44:00.168661 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c83a8f-b088-4b22-bd19-9fadd542773a" containerName="oc" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.168677 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c83a8f-b088-4b22-bd19-9fadd542773a" containerName="oc" Feb 25 13:44:00 
crc kubenswrapper[5005]: I0225 13:44:00.168957 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c83a8f-b088-4b22-bd19-9fadd542773a" containerName="oc" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.169690 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533784-mlsqv"] Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.169772 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.181567 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.181639 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.181677 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.288869 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55gnk\" (UniqueName: \"kubernetes.io/projected/1a45359c-f163-4c5c-ad4c-a51abf86772d-kube-api-access-55gnk\") pod \"auto-csr-approver-29533784-mlsqv\" (UID: \"1a45359c-f163-4c5c-ad4c-a51abf86772d\") " pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.390974 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55gnk\" (UniqueName: \"kubernetes.io/projected/1a45359c-f163-4c5c-ad4c-a51abf86772d-kube-api-access-55gnk\") pod \"auto-csr-approver-29533784-mlsqv\" (UID: \"1a45359c-f163-4c5c-ad4c-a51abf86772d\") " pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.416861 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55gnk\" (UniqueName: \"kubernetes.io/projected/1a45359c-f163-4c5c-ad4c-a51abf86772d-kube-api-access-55gnk\") pod \"auto-csr-approver-29533784-mlsqv\" (UID: \"1a45359c-f163-4c5c-ad4c-a51abf86772d\") " pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:00 crc kubenswrapper[5005]: I0225 13:44:00.496192 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:01 crc kubenswrapper[5005]: I0225 13:44:01.014089 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533784-mlsqv"] Feb 25 13:44:01 crc kubenswrapper[5005]: I0225 13:44:01.302269 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" event={"ID":"1a45359c-f163-4c5c-ad4c-a51abf86772d","Type":"ContainerStarted","Data":"8108a83522852d5df66dbb36c494b03eb51951e14ce4a56e95e58dbfd59cccca"} Feb 25 13:44:02 crc kubenswrapper[5005]: I0225 13:44:02.312280 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" event={"ID":"1a45359c-f163-4c5c-ad4c-a51abf86772d","Type":"ContainerStarted","Data":"3ddc5faecdf2805132adebdc230bd1d7f397a3c9750a3d1328eab720812c468a"} Feb 25 13:44:02 crc kubenswrapper[5005]: I0225 13:44:02.334459 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" podStartSLOduration=1.399055369 podStartE2EDuration="2.334440213s" podCreationTimestamp="2026-02-25 13:44:00 +0000 UTC" firstStartedPulling="2026-02-25 13:44:01.020664254 +0000 UTC m=+8755.061396581" lastFinishedPulling="2026-02-25 13:44:01.956049098 +0000 UTC m=+8755.996781425" observedRunningTime="2026-02-25 13:44:02.327144866 +0000 UTC m=+8756.367877193" watchObservedRunningTime="2026-02-25 13:44:02.334440213 +0000 UTC m=+8756.375172540" Feb 
25 13:44:03 crc kubenswrapper[5005]: I0225 13:44:03.326829 5005 generic.go:334] "Generic (PLEG): container finished" podID="1a45359c-f163-4c5c-ad4c-a51abf86772d" containerID="3ddc5faecdf2805132adebdc230bd1d7f397a3c9750a3d1328eab720812c468a" exitCode=0 Feb 25 13:44:03 crc kubenswrapper[5005]: I0225 13:44:03.328343 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" event={"ID":"1a45359c-f163-4c5c-ad4c-a51abf86772d","Type":"ContainerDied","Data":"3ddc5faecdf2805132adebdc230bd1d7f397a3c9750a3d1328eab720812c468a"} Feb 25 13:44:04 crc kubenswrapper[5005]: I0225 13:44:04.693313 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:04 crc kubenswrapper[5005]: I0225 13:44:04.778584 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55gnk\" (UniqueName: \"kubernetes.io/projected/1a45359c-f163-4c5c-ad4c-a51abf86772d-kube-api-access-55gnk\") pod \"1a45359c-f163-4c5c-ad4c-a51abf86772d\" (UID: \"1a45359c-f163-4c5c-ad4c-a51abf86772d\") " Feb 25 13:44:04 crc kubenswrapper[5005]: I0225 13:44:04.790081 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a45359c-f163-4c5c-ad4c-a51abf86772d-kube-api-access-55gnk" (OuterVolumeSpecName: "kube-api-access-55gnk") pod "1a45359c-f163-4c5c-ad4c-a51abf86772d" (UID: "1a45359c-f163-4c5c-ad4c-a51abf86772d"). InnerVolumeSpecName "kube-api-access-55gnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:44:04 crc kubenswrapper[5005]: I0225 13:44:04.882440 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55gnk\" (UniqueName: \"kubernetes.io/projected/1a45359c-f163-4c5c-ad4c-a51abf86772d-kube-api-access-55gnk\") on node \"crc\" DevicePath \"\"" Feb 25 13:44:05 crc kubenswrapper[5005]: I0225 13:44:05.348527 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" event={"ID":"1a45359c-f163-4c5c-ad4c-a51abf86772d","Type":"ContainerDied","Data":"8108a83522852d5df66dbb36c494b03eb51951e14ce4a56e95e58dbfd59cccca"} Feb 25 13:44:05 crc kubenswrapper[5005]: I0225 13:44:05.348572 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8108a83522852d5df66dbb36c494b03eb51951e14ce4a56e95e58dbfd59cccca" Feb 25 13:44:05 crc kubenswrapper[5005]: I0225 13:44:05.348641 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533784-mlsqv" Feb 25 13:44:05 crc kubenswrapper[5005]: I0225 13:44:05.406110 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533778-l9k9s"] Feb 25 13:44:05 crc kubenswrapper[5005]: I0225 13:44:05.418305 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533778-l9k9s"] Feb 25 13:44:06 crc kubenswrapper[5005]: I0225 13:44:06.695909 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a997be-9c86-4e1a-9684-709bacc2ba6e" path="/var/lib/kubelet/pods/02a997be-9c86-4e1a-9684-709bacc2ba6e/volumes" Feb 25 13:44:26 crc kubenswrapper[5005]: I0225 13:44:26.443877 5005 scope.go:117] "RemoveContainer" containerID="b543ca39d3d58019f6313526a32b02e2e67a3ae3b2d11669c29390b32e04f3f8" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.155906 5005 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj"] Feb 25 13:45:00 crc kubenswrapper[5005]: E0225 13:45:00.156975 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a45359c-f163-4c5c-ad4c-a51abf86772d" containerName="oc" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.156991 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a45359c-f163-4c5c-ad4c-a51abf86772d" containerName="oc" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.157214 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a45359c-f163-4c5c-ad4c-a51abf86772d" containerName="oc" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.158026 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.160737 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.161091 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.170779 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj"] Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.235619 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-config-volume\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.235743 5005 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-secret-volume\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.235769 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhcf\" (UniqueName: \"kubernetes.io/projected/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-kube-api-access-bqhcf\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.337588 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-config-volume\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.337718 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-secret-volume\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.337746 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqhcf\" (UniqueName: \"kubernetes.io/projected/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-kube-api-access-bqhcf\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.339565 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-config-volume\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.344472 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-secret-volume\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.355057 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqhcf\" (UniqueName: \"kubernetes.io/projected/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-kube-api-access-bqhcf\") pod \"collect-profiles-29533785-dxjwj\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.482355 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:00 crc kubenswrapper[5005]: I0225 13:45:00.998345 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj"] Feb 25 13:45:01 crc kubenswrapper[5005]: I0225 13:45:01.851725 5005 generic.go:334] "Generic (PLEG): container finished" podID="7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" containerID="c89506e7b9c8de005372f52f5fbb8db87db2393fb7fa896a1dfa11759425d830" exitCode=0 Feb 25 13:45:01 crc kubenswrapper[5005]: I0225 13:45:01.851775 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" event={"ID":"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368","Type":"ContainerDied","Data":"c89506e7b9c8de005372f52f5fbb8db87db2393fb7fa896a1dfa11759425d830"} Feb 25 13:45:01 crc kubenswrapper[5005]: I0225 13:45:01.852151 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" event={"ID":"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368","Type":"ContainerStarted","Data":"6bfd64043bdd46c008a14b3e7de57d0f4b66ca7c65c9abbd41eb88befe8d579a"} Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.183565 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.306026 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqhcf\" (UniqueName: \"kubernetes.io/projected/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-kube-api-access-bqhcf\") pod \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.306361 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-config-volume\") pod \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.306479 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-secret-volume\") pod \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\" (UID: \"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368\") " Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.310478 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" (UID: "7ef357d8-c0ff-4dfa-abdc-3eaa071f1368"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.317875 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" (UID: "7ef357d8-c0ff-4dfa-abdc-3eaa071f1368"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.326555 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g7tp7"] Feb 25 13:45:03 crc kubenswrapper[5005]: E0225 13:45:03.327465 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" containerName="collect-profiles" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.327486 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" containerName="collect-profiles" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.327783 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" containerName="collect-profiles" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.329877 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.335707 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-kube-api-access-bqhcf" (OuterVolumeSpecName: "kube-api-access-bqhcf") pod "7ef357d8-c0ff-4dfa-abdc-3eaa071f1368" (UID: "7ef357d8-c0ff-4dfa-abdc-3eaa071f1368"). InnerVolumeSpecName "kube-api-access-bqhcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.341622 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7tp7"] Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.409284 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3b898a-5622-4477-8eb9-85d277e28efb-catalog-content\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.409888 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3b898a-5622-4477-8eb9-85d277e28efb-utilities\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.409995 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96pvt\" (UniqueName: \"kubernetes.io/projected/5b3b898a-5622-4477-8eb9-85d277e28efb-kube-api-access-96pvt\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.410164 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.410193 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-secret-volume\") on node \"crc\" DevicePath 
\"\"" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.410204 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqhcf\" (UniqueName: \"kubernetes.io/projected/7ef357d8-c0ff-4dfa-abdc-3eaa071f1368-kube-api-access-bqhcf\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.518559 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9djgj"] Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.518643 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96pvt\" (UniqueName: \"kubernetes.io/projected/5b3b898a-5622-4477-8eb9-85d277e28efb-kube-api-access-96pvt\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.518733 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3b898a-5622-4477-8eb9-85d277e28efb-catalog-content\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.518849 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3b898a-5622-4477-8eb9-85d277e28efb-utilities\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.519456 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3b898a-5622-4477-8eb9-85d277e28efb-utilities\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " 
pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.519750 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3b898a-5622-4477-8eb9-85d277e28efb-catalog-content\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.520886 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.534269 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9djgj"] Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.546556 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pvt\" (UniqueName: \"kubernetes.io/projected/5b3b898a-5622-4477-8eb9-85d277e28efb-kube-api-access-96pvt\") pod \"certified-operators-g7tp7\" (UID: \"5b3b898a-5622-4477-8eb9-85d277e28efb\") " pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.621068 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-catalog-content\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.621166 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-utilities\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " 
pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.621215 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btb96\" (UniqueName: \"kubernetes.io/projected/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-kube-api-access-btb96\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.727435 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-catalog-content\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.727553 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-utilities\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.727611 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btb96\" (UniqueName: \"kubernetes.io/projected/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-kube-api-access-btb96\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.729233 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-catalog-content\") pod \"community-operators-9djgj\" (UID: 
\"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.729571 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-utilities\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.749219 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.751211 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btb96\" (UniqueName: \"kubernetes.io/projected/2bb0d8f1-3374-4893-b5db-e7c5e27b0f43-kube-api-access-btb96\") pod \"community-operators-9djgj\" (UID: \"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43\") " pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.848552 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.889644 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" event={"ID":"7ef357d8-c0ff-4dfa-abdc-3eaa071f1368","Type":"ContainerDied","Data":"6bfd64043bdd46c008a14b3e7de57d0f4b66ca7c65c9abbd41eb88befe8d579a"} Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.889883 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bfd64043bdd46c008a14b3e7de57d0f4b66ca7c65c9abbd41eb88befe8d579a" Feb 25 13:45:03 crc kubenswrapper[5005]: I0225 13:45:03.889964 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533785-dxjwj" Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.276323 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj"] Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.286588 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533740-4ztrj"] Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.323437 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7tp7"] Feb 25 13:45:04 crc kubenswrapper[5005]: W0225 13:45:04.342549 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3b898a_5622_4477_8eb9_85d277e28efb.slice/crio-10ee8fcfc1e407eef766b51c593603a2a949086ada5f5709e11710b6642fbdd9 WatchSource:0}: Error finding container 10ee8fcfc1e407eef766b51c593603a2a949086ada5f5709e11710b6642fbdd9: Status 404 returned error can't find the container with id 10ee8fcfc1e407eef766b51c593603a2a949086ada5f5709e11710b6642fbdd9 Feb 25 13:45:04 crc kubenswrapper[5005]: W0225 13:45:04.463925 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bb0d8f1_3374_4893_b5db_e7c5e27b0f43.slice/crio-db403f0a235ed54d021ad16e491c768e35114a0cd7864d77e820509cff0fb813 WatchSource:0}: Error finding container db403f0a235ed54d021ad16e491c768e35114a0cd7864d77e820509cff0fb813: Status 404 returned error can't find the container with id db403f0a235ed54d021ad16e491c768e35114a0cd7864d77e820509cff0fb813 Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.464225 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9djgj"] Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.696651 5005 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729d63f2-6dfd-4319-ad89-5ddd51220848" path="/var/lib/kubelet/pods/729d63f2-6dfd-4319-ad89-5ddd51220848/volumes" Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.900536 5005 generic.go:334] "Generic (PLEG): container finished" podID="2bb0d8f1-3374-4893-b5db-e7c5e27b0f43" containerID="734c0b5c0d92c88d885e66cb58f5990bee3ec1cbf7235e1a66b249ae48abb9e1" exitCode=0 Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.900637 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9djgj" event={"ID":"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43","Type":"ContainerDied","Data":"734c0b5c0d92c88d885e66cb58f5990bee3ec1cbf7235e1a66b249ae48abb9e1"} Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.900687 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9djgj" event={"ID":"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43","Type":"ContainerStarted","Data":"db403f0a235ed54d021ad16e491c768e35114a0cd7864d77e820509cff0fb813"} Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.904937 5005 generic.go:334] "Generic (PLEG): container finished" podID="5b3b898a-5622-4477-8eb9-85d277e28efb" containerID="0296ef732230bb7cf3601c97dc9b335b949ce5b08a69900a613d36698933e8ad" exitCode=0 Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.904993 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7tp7" event={"ID":"5b3b898a-5622-4477-8eb9-85d277e28efb","Type":"ContainerDied","Data":"0296ef732230bb7cf3601c97dc9b335b949ce5b08a69900a613d36698933e8ad"} Feb 25 13:45:04 crc kubenswrapper[5005]: I0225 13:45:04.905031 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7tp7" event={"ID":"5b3b898a-5622-4477-8eb9-85d277e28efb","Type":"ContainerStarted","Data":"10ee8fcfc1e407eef766b51c593603a2a949086ada5f5709e11710b6642fbdd9"} Feb 25 13:45:10 
crc kubenswrapper[5005]: I0225 13:45:10.965592 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9djgj" event={"ID":"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43","Type":"ContainerStarted","Data":"16a400eb181a707f388523e948e8d785d458603a15d6d5fecf90aa3ac4aa310a"} Feb 25 13:45:10 crc kubenswrapper[5005]: I0225 13:45:10.971335 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7tp7" event={"ID":"5b3b898a-5622-4477-8eb9-85d277e28efb","Type":"ContainerStarted","Data":"4e8aa7135a06b5e7d06818a767b905d3acd82ceb4e86ebc8565a0d670dedc038"} Feb 25 13:45:11 crc kubenswrapper[5005]: I0225 13:45:11.981650 5005 generic.go:334] "Generic (PLEG): container finished" podID="2bb0d8f1-3374-4893-b5db-e7c5e27b0f43" containerID="16a400eb181a707f388523e948e8d785d458603a15d6d5fecf90aa3ac4aa310a" exitCode=0 Feb 25 13:45:11 crc kubenswrapper[5005]: I0225 13:45:11.981713 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9djgj" event={"ID":"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43","Type":"ContainerDied","Data":"16a400eb181a707f388523e948e8d785d458603a15d6d5fecf90aa3ac4aa310a"} Feb 25 13:45:11 crc kubenswrapper[5005]: I0225 13:45:11.983977 5005 generic.go:334] "Generic (PLEG): container finished" podID="5b3b898a-5622-4477-8eb9-85d277e28efb" containerID="4e8aa7135a06b5e7d06818a767b905d3acd82ceb4e86ebc8565a0d670dedc038" exitCode=0 Feb 25 13:45:11 crc kubenswrapper[5005]: I0225 13:45:11.984003 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7tp7" event={"ID":"5b3b898a-5622-4477-8eb9-85d277e28efb","Type":"ContainerDied","Data":"4e8aa7135a06b5e7d06818a767b905d3acd82ceb4e86ebc8565a0d670dedc038"} Feb 25 13:45:14 crc kubenswrapper[5005]: I0225 13:45:14.002237 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9djgj" 
event={"ID":"2bb0d8f1-3374-4893-b5db-e7c5e27b0f43","Type":"ContainerStarted","Data":"2231aa00804c4463a22c7381c4197a186eb283905dfd54be5777a0aa035a2dee"} Feb 25 13:45:14 crc kubenswrapper[5005]: I0225 13:45:14.024217 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9djgj" podStartSLOduration=2.238215994 podStartE2EDuration="11.024194189s" podCreationTimestamp="2026-02-25 13:45:03 +0000 UTC" firstStartedPulling="2026-02-25 13:45:04.902495922 +0000 UTC m=+8818.943228269" lastFinishedPulling="2026-02-25 13:45:13.688474137 +0000 UTC m=+8827.729206464" observedRunningTime="2026-02-25 13:45:14.01645685 +0000 UTC m=+8828.057189187" watchObservedRunningTime="2026-02-25 13:45:14.024194189 +0000 UTC m=+8828.064926516" Feb 25 13:45:15 crc kubenswrapper[5005]: I0225 13:45:15.022929 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7tp7" event={"ID":"5b3b898a-5622-4477-8eb9-85d277e28efb","Type":"ContainerStarted","Data":"345993f82406c3aa4c87d6acd1323476911aa39b8e4ef1e64fe0e7ef99a80a85"} Feb 25 13:45:15 crc kubenswrapper[5005]: I0225 13:45:15.043608 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g7tp7" podStartSLOduration=2.8406705 podStartE2EDuration="12.043592076s" podCreationTimestamp="2026-02-25 13:45:03 +0000 UTC" firstStartedPulling="2026-02-25 13:45:04.906824025 +0000 UTC m=+8818.947556352" lastFinishedPulling="2026-02-25 13:45:14.109745611 +0000 UTC m=+8828.150477928" observedRunningTime="2026-02-25 13:45:15.039428427 +0000 UTC m=+8829.080160774" watchObservedRunningTime="2026-02-25 13:45:15.043592076 +0000 UTC m=+8829.084324403" Feb 25 13:45:23 crc kubenswrapper[5005]: I0225 13:45:23.749999 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:23 crc kubenswrapper[5005]: I0225 13:45:23.750594 5005 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:23 crc kubenswrapper[5005]: I0225 13:45:23.793696 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:23 crc kubenswrapper[5005]: I0225 13:45:23.849732 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:23 crc kubenswrapper[5005]: I0225 13:45:23.849775 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:23 crc kubenswrapper[5005]: I0225 13:45:23.896421 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:24 crc kubenswrapper[5005]: I0225 13:45:24.148678 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g7tp7" Feb 25 13:45:24 crc kubenswrapper[5005]: I0225 13:45:24.160343 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9djgj" Feb 25 13:45:25 crc kubenswrapper[5005]: I0225 13:45:25.453690 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7tp7"] Feb 25 13:45:25 crc kubenswrapper[5005]: I0225 13:45:25.837685 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vlc9"] Feb 25 13:45:25 crc kubenswrapper[5005]: I0225 13:45:25.838034 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9vlc9" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="registry-server" containerID="cri-o://7d7f36dc5136f430e3a8dd50a9ed4888162d7967450fa527ee5183eeaec79fd2" gracePeriod=2 Feb 25 
13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.053525 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9djgj"] Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.129029 5005 generic.go:334] "Generic (PLEG): container finished" podID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerID="7d7f36dc5136f430e3a8dd50a9ed4888162d7967450fa527ee5183eeaec79fd2" exitCode=0 Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.129105 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vlc9" event={"ID":"c2595820-1d56-4102-82a6-cbc52b963ab4","Type":"ContainerDied","Data":"7d7f36dc5136f430e3a8dd50a9ed4888162d7967450fa527ee5183eeaec79fd2"} Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.363120 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.429040 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkjrs"] Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.429258 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkjrs" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="registry-server" containerID="cri-o://c02166e9269209b5b24ada3c9ff08678af3bd1c1db6782eeae097f1efc3bebf9" gracePeriod=2 Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.477896 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsfn\" (UniqueName: \"kubernetes.io/projected/c2595820-1d56-4102-82a6-cbc52b963ab4-kube-api-access-rcsfn\") pod \"c2595820-1d56-4102-82a6-cbc52b963ab4\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.478118 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-utilities\") pod \"c2595820-1d56-4102-82a6-cbc52b963ab4\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.478393 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-catalog-content\") pod \"c2595820-1d56-4102-82a6-cbc52b963ab4\" (UID: \"c2595820-1d56-4102-82a6-cbc52b963ab4\") " Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.483699 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-utilities" (OuterVolumeSpecName: "utilities") pod "c2595820-1d56-4102-82a6-cbc52b963ab4" (UID: "c2595820-1d56-4102-82a6-cbc52b963ab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.488143 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2595820-1d56-4102-82a6-cbc52b963ab4-kube-api-access-rcsfn" (OuterVolumeSpecName: "kube-api-access-rcsfn") pod "c2595820-1d56-4102-82a6-cbc52b963ab4" (UID: "c2595820-1d56-4102-82a6-cbc52b963ab4"). InnerVolumeSpecName "kube-api-access-rcsfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.541647 5005 scope.go:117] "RemoveContainer" containerID="e6b19cbb070152cda0233926ebefcc34299b9d578f06faddc82b7125d67797c0" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.553149 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2595820-1d56-4102-82a6-cbc52b963ab4" (UID: "c2595820-1d56-4102-82a6-cbc52b963ab4"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.581649 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.581686 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsfn\" (UniqueName: \"kubernetes.io/projected/c2595820-1d56-4102-82a6-cbc52b963ab4-kube-api-access-rcsfn\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.581704 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2595820-1d56-4102-82a6-cbc52b963ab4-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.589404 5005 scope.go:117] "RemoveContainer" containerID="7d7f36dc5136f430e3a8dd50a9ed4888162d7967450fa527ee5183eeaec79fd2" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.625044 5005 scope.go:117] "RemoveContainer" containerID="9db3e5c8ec310f12f87c60bc8a9a5dc20578bff8ae5b18cef0855e76e75c852e" Feb 25 13:45:26 crc kubenswrapper[5005]: I0225 13:45:26.670657 5005 scope.go:117] "RemoveContainer" containerID="8be50e6414f6669a5c6958aae1d7b83bf431022dc41a58565bc0f3ac2f646d37" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.146510 5005 generic.go:334] "Generic (PLEG): container finished" podID="81b20372-b556-4def-bd2f-1452f40fe338" containerID="c02166e9269209b5b24ada3c9ff08678af3bd1c1db6782eeae097f1efc3bebf9" exitCode=0 Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.146598 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vlc9" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.146618 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkjrs" event={"ID":"81b20372-b556-4def-bd2f-1452f40fe338","Type":"ContainerDied","Data":"c02166e9269209b5b24ada3c9ff08678af3bd1c1db6782eeae097f1efc3bebf9"} Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.146677 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vlc9" event={"ID":"c2595820-1d56-4102-82a6-cbc52b963ab4","Type":"ContainerDied","Data":"6c584044f0912f72da9e8bc6a2c5d0341f25c017132d1a0b6b973e4c2c4e172c"} Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.193723 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vlc9"] Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.204624 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9vlc9"] Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.288950 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.395150 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bnhq\" (UniqueName: \"kubernetes.io/projected/81b20372-b556-4def-bd2f-1452f40fe338-kube-api-access-6bnhq\") pod \"81b20372-b556-4def-bd2f-1452f40fe338\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.395276 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-catalog-content\") pod \"81b20372-b556-4def-bd2f-1452f40fe338\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.395402 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-utilities\") pod \"81b20372-b556-4def-bd2f-1452f40fe338\" (UID: \"81b20372-b556-4def-bd2f-1452f40fe338\") " Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.396667 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-utilities" (OuterVolumeSpecName: "utilities") pod "81b20372-b556-4def-bd2f-1452f40fe338" (UID: "81b20372-b556-4def-bd2f-1452f40fe338"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.400582 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b20372-b556-4def-bd2f-1452f40fe338-kube-api-access-6bnhq" (OuterVolumeSpecName: "kube-api-access-6bnhq") pod "81b20372-b556-4def-bd2f-1452f40fe338" (UID: "81b20372-b556-4def-bd2f-1452f40fe338"). InnerVolumeSpecName "kube-api-access-6bnhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.446539 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81b20372-b556-4def-bd2f-1452f40fe338" (UID: "81b20372-b556-4def-bd2f-1452f40fe338"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.498712 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.499033 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bnhq\" (UniqueName: \"kubernetes.io/projected/81b20372-b556-4def-bd2f-1452f40fe338-kube-api-access-6bnhq\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:27 crc kubenswrapper[5005]: I0225 13:45:27.499045 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b20372-b556-4def-bd2f-1452f40fe338-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.156364 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkjrs" event={"ID":"81b20372-b556-4def-bd2f-1452f40fe338","Type":"ContainerDied","Data":"5453e1adbf2da47db1d03e2041029930e5ff56c60d6a403fd5a828c77ed44411"} Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.156439 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkjrs" Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.156618 5005 scope.go:117] "RemoveContainer" containerID="c02166e9269209b5b24ada3c9ff08678af3bd1c1db6782eeae097f1efc3bebf9" Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.181036 5005 scope.go:117] "RemoveContainer" containerID="acc125cc87dae88bd1bc3b73b4aea651a920eb59858e381dd7106d1379e6957c" Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.191574 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkjrs"] Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.200307 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkjrs"] Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.217078 5005 scope.go:117] "RemoveContainer" containerID="2d9183f6def143b53b10ad1b7a5a6a5102a6284dc01a1989c56d8f3c7166f962" Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.694253 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b20372-b556-4def-bd2f-1452f40fe338" path="/var/lib/kubelet/pods/81b20372-b556-4def-bd2f-1452f40fe338/volumes" Feb 25 13:45:28 crc kubenswrapper[5005]: I0225 13:45:28.694930 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" path="/var/lib/kubelet/pods/c2595820-1d56-4102-82a6-cbc52b963ab4/volumes" Feb 25 13:45:58 crc kubenswrapper[5005]: I0225 13:45:58.087081 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:45:58 crc kubenswrapper[5005]: I0225 13:45:58.087603 5005 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.137662 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533786-fdtxj"] Feb 25 13:46:00 crc kubenswrapper[5005]: E0225 13:46:00.138572 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="registry-server" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138591 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="registry-server" Feb 25 13:46:00 crc kubenswrapper[5005]: E0225 13:46:00.138609 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="extract-content" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138615 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="extract-content" Feb 25 13:46:00 crc kubenswrapper[5005]: E0225 13:46:00.138631 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="registry-server" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138638 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="registry-server" Feb 25 13:46:00 crc kubenswrapper[5005]: E0225 13:46:00.138650 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="extract-utilities" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138658 5005 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="extract-utilities" Feb 25 13:46:00 crc kubenswrapper[5005]: E0225 13:46:00.138669 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="extract-utilities" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138675 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="extract-utilities" Feb 25 13:46:00 crc kubenswrapper[5005]: E0225 13:46:00.138684 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="extract-content" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138689 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="extract-content" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138881 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2595820-1d56-4102-82a6-cbc52b963ab4" containerName="registry-server" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.138900 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b20372-b556-4def-bd2f-1452f40fe338" containerName="registry-server" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.139562 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.141881 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.141915 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.142234 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.208438 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533786-fdtxj"] Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.276651 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drg7g\" (UniqueName: \"kubernetes.io/projected/a9f22394-58df-4486-8143-5137cece0b7f-kube-api-access-drg7g\") pod \"auto-csr-approver-29533786-fdtxj\" (UID: \"a9f22394-58df-4486-8143-5137cece0b7f\") " pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.378795 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drg7g\" (UniqueName: \"kubernetes.io/projected/a9f22394-58df-4486-8143-5137cece0b7f-kube-api-access-drg7g\") pod \"auto-csr-approver-29533786-fdtxj\" (UID: \"a9f22394-58df-4486-8143-5137cece0b7f\") " pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.397530 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drg7g\" (UniqueName: \"kubernetes.io/projected/a9f22394-58df-4486-8143-5137cece0b7f-kube-api-access-drg7g\") pod \"auto-csr-approver-29533786-fdtxj\" (UID: \"a9f22394-58df-4486-8143-5137cece0b7f\") " 
pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.475099 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:00 crc kubenswrapper[5005]: I0225 13:46:00.959936 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533786-fdtxj"] Feb 25 13:46:01 crc kubenswrapper[5005]: I0225 13:46:01.462704 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" event={"ID":"a9f22394-58df-4486-8143-5137cece0b7f","Type":"ContainerStarted","Data":"07bf62aafff6a821c0e92c728b0e859c4c60c02d5fb26d7928439302c4591a3b"} Feb 25 13:46:02 crc kubenswrapper[5005]: I0225 13:46:02.475051 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" event={"ID":"a9f22394-58df-4486-8143-5137cece0b7f","Type":"ContainerStarted","Data":"d282615803ea9a81d21bab6b35e26511ea730304d4fd0755bbf7aff554f09855"} Feb 25 13:46:02 crc kubenswrapper[5005]: I0225 13:46:02.494273 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" podStartSLOduration=1.561760263 podStartE2EDuration="2.494249166s" podCreationTimestamp="2026-02-25 13:46:00 +0000 UTC" firstStartedPulling="2026-02-25 13:46:00.970785351 +0000 UTC m=+8875.011517678" lastFinishedPulling="2026-02-25 13:46:01.903274254 +0000 UTC m=+8875.944006581" observedRunningTime="2026-02-25 13:46:02.487905369 +0000 UTC m=+8876.528637686" watchObservedRunningTime="2026-02-25 13:46:02.494249166 +0000 UTC m=+8876.534981493" Feb 25 13:46:03 crc kubenswrapper[5005]: I0225 13:46:03.483724 5005 generic.go:334] "Generic (PLEG): container finished" podID="a9f22394-58df-4486-8143-5137cece0b7f" containerID="d282615803ea9a81d21bab6b35e26511ea730304d4fd0755bbf7aff554f09855" exitCode=0 Feb 25 13:46:03 crc 
kubenswrapper[5005]: I0225 13:46:03.483868 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" event={"ID":"a9f22394-58df-4486-8143-5137cece0b7f","Type":"ContainerDied","Data":"d282615803ea9a81d21bab6b35e26511ea730304d4fd0755bbf7aff554f09855"} Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.044508 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.198447 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drg7g\" (UniqueName: \"kubernetes.io/projected/a9f22394-58df-4486-8143-5137cece0b7f-kube-api-access-drg7g\") pod \"a9f22394-58df-4486-8143-5137cece0b7f\" (UID: \"a9f22394-58df-4486-8143-5137cece0b7f\") " Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.204832 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f22394-58df-4486-8143-5137cece0b7f-kube-api-access-drg7g" (OuterVolumeSpecName: "kube-api-access-drg7g") pod "a9f22394-58df-4486-8143-5137cece0b7f" (UID: "a9f22394-58df-4486-8143-5137cece0b7f"). InnerVolumeSpecName "kube-api-access-drg7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.301655 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drg7g\" (UniqueName: \"kubernetes.io/projected/a9f22394-58df-4486-8143-5137cece0b7f-kube-api-access-drg7g\") on node \"crc\" DevicePath \"\"" Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.503223 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" event={"ID":"a9f22394-58df-4486-8143-5137cece0b7f","Type":"ContainerDied","Data":"07bf62aafff6a821c0e92c728b0e859c4c60c02d5fb26d7928439302c4591a3b"} Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.503268 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07bf62aafff6a821c0e92c728b0e859c4c60c02d5fb26d7928439302c4591a3b" Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.503292 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533786-fdtxj" Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.563228 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533780-7kvg2"] Feb 25 13:46:05 crc kubenswrapper[5005]: I0225 13:46:05.571154 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533780-7kvg2"] Feb 25 13:46:06 crc kubenswrapper[5005]: I0225 13:46:06.699486 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cfd3fa8-8b51-4073-816b-c47b895cd27e" path="/var/lib/kubelet/pods/6cfd3fa8-8b51-4073-816b-c47b895cd27e/volumes" Feb 25 13:46:26 crc kubenswrapper[5005]: I0225 13:46:26.748659 5005 scope.go:117] "RemoveContainer" containerID="06bd6c37369cc629d479b39977b62ce46c6f6769d71fef74d78eb2d5728e2655" Feb 25 13:46:28 crc kubenswrapper[5005]: I0225 13:46:28.087000 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:46:28 crc kubenswrapper[5005]: I0225 13:46:28.087306 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:46:58 crc kubenswrapper[5005]: I0225 13:46:58.087873 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:46:58 crc kubenswrapper[5005]: I0225 13:46:58.088624 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:46:58 crc kubenswrapper[5005]: I0225 13:46:58.088694 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:46:58 crc kubenswrapper[5005]: I0225 13:46:58.089425 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9afbe60c6503a9378a18734a1be182e383020005a8fd3909f742fdfb6e4176b6"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 25 13:46:58 crc kubenswrapper[5005]: I0225 13:46:58.089478 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://9afbe60c6503a9378a18734a1be182e383020005a8fd3909f742fdfb6e4176b6" gracePeriod=600 Feb 25 13:46:59 crc kubenswrapper[5005]: I0225 13:46:59.064583 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="9afbe60c6503a9378a18734a1be182e383020005a8fd3909f742fdfb6e4176b6" exitCode=0 Feb 25 13:46:59 crc kubenswrapper[5005]: I0225 13:46:59.065147 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"9afbe60c6503a9378a18734a1be182e383020005a8fd3909f742fdfb6e4176b6"} Feb 25 13:46:59 crc kubenswrapper[5005]: I0225 13:46:59.065182 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4"} Feb 25 13:46:59 crc kubenswrapper[5005]: I0225 13:46:59.065202 5005 scope.go:117] "RemoveContainer" containerID="c5f542d1ec6dc300e3a44e3a29fbd2c7f4d7950016e38a2791f2d7acc0a680b4" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.137710 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ws9b"] Feb 25 13:47:43 crc kubenswrapper[5005]: E0225 13:47:43.138670 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f22394-58df-4486-8143-5137cece0b7f" containerName="oc" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.138687 5005 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a9f22394-58df-4486-8143-5137cece0b7f" containerName="oc" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.138898 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f22394-58df-4486-8143-5137cece0b7f" containerName="oc" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.140194 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.152765 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ws9b"] Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.182256 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-catalog-content\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.182658 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-utilities\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.182810 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22gmh\" (UniqueName: \"kubernetes.io/projected/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-kube-api-access-22gmh\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.284895 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-utilities\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.285387 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-utilities\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.285483 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gmh\" (UniqueName: \"kubernetes.io/projected/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-kube-api-access-22gmh\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.285684 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-catalog-content\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.286118 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-catalog-content\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.307779 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22gmh\" (UniqueName: 
\"kubernetes.io/projected/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-kube-api-access-22gmh\") pod \"redhat-operators-9ws9b\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.465607 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:43 crc kubenswrapper[5005]: I0225 13:47:43.921997 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ws9b"] Feb 25 13:47:44 crc kubenswrapper[5005]: I0225 13:47:44.553542 5005 generic.go:334] "Generic (PLEG): container finished" podID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerID="e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df" exitCode=0 Feb 25 13:47:44 crc kubenswrapper[5005]: I0225 13:47:44.553622 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerDied","Data":"e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df"} Feb 25 13:47:44 crc kubenswrapper[5005]: I0225 13:47:44.554648 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerStarted","Data":"17c587e7ba3dcaebb5f0630b9ec9dd67e409337eb8d71110b10b88191a2d1914"} Feb 25 13:47:44 crc kubenswrapper[5005]: I0225 13:47:44.555336 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:47:45 crc kubenswrapper[5005]: I0225 13:47:45.563941 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerStarted","Data":"5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8"} Feb 25 13:47:46 crc 
kubenswrapper[5005]: E0225 13:47:46.395111 5005 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a2ed9ee_def4_4fe1_9dad_1cdf014ca595.slice/crio-conmon-5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8.scope\": RecentStats: unable to find data in memory cache]" Feb 25 13:47:46 crc kubenswrapper[5005]: I0225 13:47:46.573920 5005 generic.go:334] "Generic (PLEG): container finished" podID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerID="5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8" exitCode=0 Feb 25 13:47:46 crc kubenswrapper[5005]: I0225 13:47:46.573963 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerDied","Data":"5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8"} Feb 25 13:47:47 crc kubenswrapper[5005]: I0225 13:47:47.587269 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerStarted","Data":"85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35"} Feb 25 13:47:47 crc kubenswrapper[5005]: I0225 13:47:47.605017 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ws9b" podStartSLOduration=1.947158397 podStartE2EDuration="4.605001232s" podCreationTimestamp="2026-02-25 13:47:43 +0000 UTC" firstStartedPulling="2026-02-25 13:47:44.55511607 +0000 UTC m=+8978.595848397" lastFinishedPulling="2026-02-25 13:47:47.212958905 +0000 UTC m=+8981.253691232" observedRunningTime="2026-02-25 13:47:47.603512476 +0000 UTC m=+8981.644244823" watchObservedRunningTime="2026-02-25 13:47:47.605001232 +0000 UTC m=+8981.645733559" Feb 25 13:47:53 crc kubenswrapper[5005]: I0225 13:47:53.466041 5005 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:53 crc kubenswrapper[5005]: I0225 13:47:53.466664 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:53 crc kubenswrapper[5005]: I0225 13:47:53.521098 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:53 crc kubenswrapper[5005]: I0225 13:47:53.682065 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:53 crc kubenswrapper[5005]: I0225 13:47:53.756645 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ws9b"] Feb 25 13:47:55 crc kubenswrapper[5005]: I0225 13:47:55.646855 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ws9b" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="registry-server" containerID="cri-o://85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35" gracePeriod=2 Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.130736 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.245609 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-utilities\") pod \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.246355 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22gmh\" (UniqueName: \"kubernetes.io/projected/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-kube-api-access-22gmh\") pod \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.246732 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-catalog-content\") pod \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\" (UID: \"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595\") " Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.246990 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-utilities" (OuterVolumeSpecName: "utilities") pod "9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" (UID: "9a2ed9ee-def4-4fe1-9dad-1cdf014ca595"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.247463 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.286193 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-kube-api-access-22gmh" (OuterVolumeSpecName: "kube-api-access-22gmh") pod "9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" (UID: "9a2ed9ee-def4-4fe1-9dad-1cdf014ca595"). InnerVolumeSpecName "kube-api-access-22gmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.350640 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22gmh\" (UniqueName: \"kubernetes.io/projected/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-kube-api-access-22gmh\") on node \"crc\" DevicePath \"\"" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.659954 5005 generic.go:334] "Generic (PLEG): container finished" podID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerID="85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35" exitCode=0 Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.660061 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerDied","Data":"85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35"} Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.660109 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ws9b" event={"ID":"9a2ed9ee-def4-4fe1-9dad-1cdf014ca595","Type":"ContainerDied","Data":"17c587e7ba3dcaebb5f0630b9ec9dd67e409337eb8d71110b10b88191a2d1914"} Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 
13:47:56.660133 5005 scope.go:117] "RemoveContainer" containerID="85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.660391 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ws9b" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.686910 5005 scope.go:117] "RemoveContainer" containerID="5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.709952 5005 scope.go:117] "RemoveContainer" containerID="e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.764262 5005 scope.go:117] "RemoveContainer" containerID="85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35" Feb 25 13:47:56 crc kubenswrapper[5005]: E0225 13:47:56.765570 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35\": container with ID starting with 85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35 not found: ID does not exist" containerID="85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.765639 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35"} err="failed to get container status \"85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35\": rpc error: code = NotFound desc = could not find container \"85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35\": container with ID starting with 85bcdd5b5744a983390fbba6693f333aa38d8dda77b0ae07f7a8f67ae4180f35 not found: ID does not exist" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.765675 5005 
scope.go:117] "RemoveContainer" containerID="5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8" Feb 25 13:47:56 crc kubenswrapper[5005]: E0225 13:47:56.766327 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8\": container with ID starting with 5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8 not found: ID does not exist" containerID="5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.766394 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8"} err="failed to get container status \"5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8\": rpc error: code = NotFound desc = could not find container \"5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8\": container with ID starting with 5bb78fe7b3ead2e453374d7b256484cd23db12d11efc4b98b3ddb3db918afdd8 not found: ID does not exist" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.766426 5005 scope.go:117] "RemoveContainer" containerID="e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df" Feb 25 13:47:56 crc kubenswrapper[5005]: E0225 13:47:56.766830 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df\": container with ID starting with e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df not found: ID does not exist" containerID="e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df" Feb 25 13:47:56 crc kubenswrapper[5005]: I0225 13:47:56.766849 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df"} err="failed to get container status \"e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df\": rpc error: code = NotFound desc = could not find container \"e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df\": container with ID starting with e79bb7808578cf50c3d7ac1f48a89c26061317d5ba78c4e7590154450ad147df not found: ID does not exist" Feb 25 13:47:58 crc kubenswrapper[5005]: I0225 13:47:58.125459 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" (UID: "9a2ed9ee-def4-4fe1-9dad-1cdf014ca595"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:47:58 crc kubenswrapper[5005]: I0225 13:47:58.198968 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:47:58 crc kubenswrapper[5005]: I0225 13:47:58.203831 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ws9b"] Feb 25 13:47:58 crc kubenswrapper[5005]: I0225 13:47:58.213445 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ws9b"] Feb 25 13:47:58 crc kubenswrapper[5005]: I0225 13:47:58.700151 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" path="/var/lib/kubelet/pods/9a2ed9ee-def4-4fe1-9dad-1cdf014ca595/volumes" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.189394 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533788-9rc8j"] Feb 25 13:48:00 crc kubenswrapper[5005]: E0225 
13:48:00.190108 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="extract-content" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.190124 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="extract-content" Feb 25 13:48:00 crc kubenswrapper[5005]: E0225 13:48:00.190140 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="extract-utilities" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.190146 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="extract-utilities" Feb 25 13:48:00 crc kubenswrapper[5005]: E0225 13:48:00.190176 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="registry-server" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.190182 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="registry-server" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.190366 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2ed9ee-def4-4fe1-9dad-1cdf014ca595" containerName="registry-server" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.191184 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.199609 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.202174 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.202880 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.214786 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533788-9rc8j"] Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.258590 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vf4\" (UniqueName: \"kubernetes.io/projected/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a-kube-api-access-l4vf4\") pod \"auto-csr-approver-29533788-9rc8j\" (UID: \"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a\") " pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.360591 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vf4\" (UniqueName: \"kubernetes.io/projected/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a-kube-api-access-l4vf4\") pod \"auto-csr-approver-29533788-9rc8j\" (UID: \"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a\") " pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.387493 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vf4\" (UniqueName: \"kubernetes.io/projected/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a-kube-api-access-l4vf4\") pod \"auto-csr-approver-29533788-9rc8j\" (UID: \"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a\") " 
pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:00 crc kubenswrapper[5005]: I0225 13:48:00.516263 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:01 crc kubenswrapper[5005]: I0225 13:48:01.078888 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533788-9rc8j"] Feb 25 13:48:01 crc kubenswrapper[5005]: I0225 13:48:01.722107 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" event={"ID":"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a","Type":"ContainerStarted","Data":"b8d67789db4d88aa28a72c8c0368c01f096c4e875da81b3860be230e560e8190"} Feb 25 13:48:02 crc kubenswrapper[5005]: I0225 13:48:02.730258 5005 generic.go:334] "Generic (PLEG): container finished" podID="4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a" containerID="07659e8997170e4f7e68ab3fff7a1bdd0aaa99d4a2a4498a9036479ca3fc804e" exitCode=0 Feb 25 13:48:02 crc kubenswrapper[5005]: I0225 13:48:02.730581 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" event={"ID":"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a","Type":"ContainerDied","Data":"07659e8997170e4f7e68ab3fff7a1bdd0aaa99d4a2a4498a9036479ca3fc804e"} Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.145999 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.242165 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4vf4\" (UniqueName: \"kubernetes.io/projected/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a-kube-api-access-l4vf4\") pod \"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a\" (UID: \"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a\") " Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.247764 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a-kube-api-access-l4vf4" (OuterVolumeSpecName: "kube-api-access-l4vf4") pod "4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a" (UID: "4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a"). InnerVolumeSpecName "kube-api-access-l4vf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.344778 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4vf4\" (UniqueName: \"kubernetes.io/projected/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a-kube-api-access-l4vf4\") on node \"crc\" DevicePath \"\"" Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.749806 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" event={"ID":"4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a","Type":"ContainerDied","Data":"b8d67789db4d88aa28a72c8c0368c01f096c4e875da81b3860be230e560e8190"} Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.749845 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d67789db4d88aa28a72c8c0368c01f096c4e875da81b3860be230e560e8190" Feb 25 13:48:04 crc kubenswrapper[5005]: I0225 13:48:04.749873 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533788-9rc8j" Feb 25 13:48:05 crc kubenswrapper[5005]: I0225 13:48:05.218820 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533782-bhmmm"] Feb 25 13:48:05 crc kubenswrapper[5005]: I0225 13:48:05.229254 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533782-bhmmm"] Feb 25 13:48:06 crc kubenswrapper[5005]: I0225 13:48:06.710177 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c83a8f-b088-4b22-bd19-9fadd542773a" path="/var/lib/kubelet/pods/d1c83a8f-b088-4b22-bd19-9fadd542773a/volumes" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.527653 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f8gw9"] Feb 25 13:48:26 crc kubenswrapper[5005]: E0225 13:48:26.528594 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a" containerName="oc" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.528605 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a" containerName="oc" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.528804 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a" containerName="oc" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.530215 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.553720 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8gw9"] Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.606126 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-catalog-content\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.606317 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxb6\" (UniqueName: \"kubernetes.io/projected/742ba61c-49d8-422f-8627-ee4196bdba8e-kube-api-access-hhxb6\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.606414 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-utilities\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.708858 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhxb6\" (UniqueName: \"kubernetes.io/projected/742ba61c-49d8-422f-8627-ee4196bdba8e-kube-api-access-hhxb6\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.708922 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-utilities\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.709198 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-catalog-content\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.709648 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-utilities\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.709733 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-catalog-content\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.733312 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhxb6\" (UniqueName: \"kubernetes.io/projected/742ba61c-49d8-422f-8627-ee4196bdba8e-kube-api-access-hhxb6\") pod \"redhat-marketplace-f8gw9\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.852404 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:26 crc kubenswrapper[5005]: I0225 13:48:26.866344 5005 scope.go:117] "RemoveContainer" containerID="857d677ade39ee9a0495cfb2bc5df193d449691f7afe91aebcf356bf502a807e" Feb 25 13:48:27 crc kubenswrapper[5005]: I0225 13:48:27.367895 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8gw9"] Feb 25 13:48:27 crc kubenswrapper[5005]: I0225 13:48:27.953917 5005 generic.go:334] "Generic (PLEG): container finished" podID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerID="9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9" exitCode=0 Feb 25 13:48:27 crc kubenswrapper[5005]: I0225 13:48:27.953968 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerDied","Data":"9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9"} Feb 25 13:48:27 crc kubenswrapper[5005]: I0225 13:48:27.954207 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerStarted","Data":"dfcc6058a14bac9a01301db8d65741c000bcb95a942ea5de0f6421ba0677dc67"} Feb 25 13:48:28 crc kubenswrapper[5005]: I0225 13:48:28.966614 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerStarted","Data":"20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02"} Feb 25 13:48:29 crc kubenswrapper[5005]: I0225 13:48:29.976768 5005 generic.go:334] "Generic (PLEG): container finished" podID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerID="20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02" exitCode=0 Feb 25 13:48:29 crc kubenswrapper[5005]: I0225 13:48:29.976853 5005 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerDied","Data":"20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02"} Feb 25 13:48:30 crc kubenswrapper[5005]: I0225 13:48:30.986795 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerStarted","Data":"76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6"} Feb 25 13:48:31 crc kubenswrapper[5005]: I0225 13:48:31.002966 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f8gw9" podStartSLOduration=2.563524744 podStartE2EDuration="5.002952281s" podCreationTimestamp="2026-02-25 13:48:26 +0000 UTC" firstStartedPulling="2026-02-25 13:48:27.955925047 +0000 UTC m=+9021.996657374" lastFinishedPulling="2026-02-25 13:48:30.395352584 +0000 UTC m=+9024.436084911" observedRunningTime="2026-02-25 13:48:31.002352072 +0000 UTC m=+9025.043084409" watchObservedRunningTime="2026-02-25 13:48:31.002952281 +0000 UTC m=+9025.043684608" Feb 25 13:48:36 crc kubenswrapper[5005]: I0225 13:48:36.853063 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:36 crc kubenswrapper[5005]: I0225 13:48:36.853793 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:36 crc kubenswrapper[5005]: I0225 13:48:36.906306 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:37 crc kubenswrapper[5005]: I0225 13:48:37.108148 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:37 crc kubenswrapper[5005]: I0225 
13:48:37.199869 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8gw9"] Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.054532 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f8gw9" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="registry-server" containerID="cri-o://76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6" gracePeriod=2 Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.675118 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.754754 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-catalog-content\") pod \"742ba61c-49d8-422f-8627-ee4196bdba8e\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.755019 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxb6\" (UniqueName: \"kubernetes.io/projected/742ba61c-49d8-422f-8627-ee4196bdba8e-kube-api-access-hhxb6\") pod \"742ba61c-49d8-422f-8627-ee4196bdba8e\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.755091 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-utilities\") pod \"742ba61c-49d8-422f-8627-ee4196bdba8e\" (UID: \"742ba61c-49d8-422f-8627-ee4196bdba8e\") " Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.756065 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-utilities" (OuterVolumeSpecName: 
"utilities") pod "742ba61c-49d8-422f-8627-ee4196bdba8e" (UID: "742ba61c-49d8-422f-8627-ee4196bdba8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.756417 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.760876 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742ba61c-49d8-422f-8627-ee4196bdba8e-kube-api-access-hhxb6" (OuterVolumeSpecName: "kube-api-access-hhxb6") pod "742ba61c-49d8-422f-8627-ee4196bdba8e" (UID: "742ba61c-49d8-422f-8627-ee4196bdba8e"). InnerVolumeSpecName "kube-api-access-hhxb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.782990 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "742ba61c-49d8-422f-8627-ee4196bdba8e" (UID: "742ba61c-49d8-422f-8627-ee4196bdba8e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.859293 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/742ba61c-49d8-422f-8627-ee4196bdba8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:48:39 crc kubenswrapper[5005]: I0225 13:48:39.859328 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhxb6\" (UniqueName: \"kubernetes.io/projected/742ba61c-49d8-422f-8627-ee4196bdba8e-kube-api-access-hhxb6\") on node \"crc\" DevicePath \"\"" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.065029 5005 generic.go:334] "Generic (PLEG): container finished" podID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerID="76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6" exitCode=0 Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.065090 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerDied","Data":"76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6"} Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.065108 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8gw9" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.066632 5005 scope.go:117] "RemoveContainer" containerID="76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.066486 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8gw9" event={"ID":"742ba61c-49d8-422f-8627-ee4196bdba8e","Type":"ContainerDied","Data":"dfcc6058a14bac9a01301db8d65741c000bcb95a942ea5de0f6421ba0677dc67"} Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.085984 5005 scope.go:117] "RemoveContainer" containerID="20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.104512 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8gw9"] Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.107139 5005 scope.go:117] "RemoveContainer" containerID="9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.111195 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8gw9"] Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.148551 5005 scope.go:117] "RemoveContainer" containerID="76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6" Feb 25 13:48:40 crc kubenswrapper[5005]: E0225 13:48:40.149339 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6\": container with ID starting with 76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6 not found: ID does not exist" containerID="76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.149405 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6"} err="failed to get container status \"76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6\": rpc error: code = NotFound desc = could not find container \"76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6\": container with ID starting with 76f295d64dbec0b295a891264c37436238765f1c7b1e8afe69d758ea6736cdd6 not found: ID does not exist" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.149434 5005 scope.go:117] "RemoveContainer" containerID="20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02" Feb 25 13:48:40 crc kubenswrapper[5005]: E0225 13:48:40.149963 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02\": container with ID starting with 20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02 not found: ID does not exist" containerID="20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.150013 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02"} err="failed to get container status \"20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02\": rpc error: code = NotFound desc = could not find container \"20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02\": container with ID starting with 20dea8f0d4d94985ff23256f01150dd7a96d245e09cb4faa466f52ccdcb3bd02 not found: ID does not exist" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.150037 5005 scope.go:117] "RemoveContainer" containerID="9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9" Feb 25 13:48:40 crc kubenswrapper[5005]: E0225 
13:48:40.150272 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9\": container with ID starting with 9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9 not found: ID does not exist" containerID="9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.150313 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9"} err="failed to get container status \"9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9\": rpc error: code = NotFound desc = could not find container \"9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9\": container with ID starting with 9de0ec7c86f50dbefad374dd88c697b35424844f988ede771e41bfb153c020f9 not found: ID does not exist" Feb 25 13:48:40 crc kubenswrapper[5005]: I0225 13:48:40.696149 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" path="/var/lib/kubelet/pods/742ba61c-49d8-422f-8627-ee4196bdba8e/volumes" Feb 25 13:48:58 crc kubenswrapper[5005]: I0225 13:48:58.087725 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:48:58 crc kubenswrapper[5005]: I0225 13:48:58.088298 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 25 13:49:28 crc kubenswrapper[5005]: I0225 13:49:28.087481 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:49:28 crc kubenswrapper[5005]: I0225 13:49:28.088523 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.087843 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.088328 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.088373 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.088891 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4"} 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.088946 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" gracePeriod=600 Feb 25 13:49:58 crc kubenswrapper[5005]: E0225 13:49:58.215607 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.760035 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" exitCode=0 Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.760085 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4"} Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.760128 5005 scope.go:117] "RemoveContainer" containerID="9afbe60c6503a9378a18734a1be182e383020005a8fd3909f742fdfb6e4176b6" Feb 25 13:49:58 crc kubenswrapper[5005]: I0225 13:49:58.760881 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 
25 13:49:58 crc kubenswrapper[5005]: E0225 13:49:58.761295 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.153010 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533790-xf5wq"] Feb 25 13:50:00 crc kubenswrapper[5005]: E0225 13:50:00.153755 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="extract-content" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.153769 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="extract-content" Feb 25 13:50:00 crc kubenswrapper[5005]: E0225 13:50:00.153798 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="registry-server" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.153805 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="registry-server" Feb 25 13:50:00 crc kubenswrapper[5005]: E0225 13:50:00.153819 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="extract-utilities" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.153827 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="extract-utilities" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.154073 5005 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="742ba61c-49d8-422f-8627-ee4196bdba8e" containerName="registry-server" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.154855 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.159785 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.160041 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.160153 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.170686 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533790-xf5wq"] Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.323605 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9hq\" (UniqueName: \"kubernetes.io/projected/e3c2f238-06d3-489e-a175-29841682cd1d-kube-api-access-vh9hq\") pod \"auto-csr-approver-29533790-xf5wq\" (UID: \"e3c2f238-06d3-489e-a175-29841682cd1d\") " pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.425816 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9hq\" (UniqueName: \"kubernetes.io/projected/e3c2f238-06d3-489e-a175-29841682cd1d-kube-api-access-vh9hq\") pod \"auto-csr-approver-29533790-xf5wq\" (UID: \"e3c2f238-06d3-489e-a175-29841682cd1d\") " pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.448573 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9hq\" 
(UniqueName: \"kubernetes.io/projected/e3c2f238-06d3-489e-a175-29841682cd1d-kube-api-access-vh9hq\") pod \"auto-csr-approver-29533790-xf5wq\" (UID: \"e3c2f238-06d3-489e-a175-29841682cd1d\") " pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.473314 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:00 crc kubenswrapper[5005]: I0225 13:50:00.958172 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533790-xf5wq"] Feb 25 13:50:01 crc kubenswrapper[5005]: I0225 13:50:01.785705 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" event={"ID":"e3c2f238-06d3-489e-a175-29841682cd1d","Type":"ContainerStarted","Data":"55c26bb6738770496afb57368b09d9aa9fc0429012e32afceabb80707bd17350"} Feb 25 13:50:02 crc kubenswrapper[5005]: I0225 13:50:02.796471 5005 generic.go:334] "Generic (PLEG): container finished" podID="e3c2f238-06d3-489e-a175-29841682cd1d" containerID="9b274e03b37b04b19132a44c900ee67f34a19a361f6aa415d23da4351da32209" exitCode=0 Feb 25 13:50:02 crc kubenswrapper[5005]: I0225 13:50:02.796578 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" event={"ID":"e3c2f238-06d3-489e-a175-29841682cd1d","Type":"ContainerDied","Data":"9b274e03b37b04b19132a44c900ee67f34a19a361f6aa415d23da4351da32209"} Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.109735 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.297133 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh9hq\" (UniqueName: \"kubernetes.io/projected/e3c2f238-06d3-489e-a175-29841682cd1d-kube-api-access-vh9hq\") pod \"e3c2f238-06d3-489e-a175-29841682cd1d\" (UID: \"e3c2f238-06d3-489e-a175-29841682cd1d\") " Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.302856 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c2f238-06d3-489e-a175-29841682cd1d-kube-api-access-vh9hq" (OuterVolumeSpecName: "kube-api-access-vh9hq") pod "e3c2f238-06d3-489e-a175-29841682cd1d" (UID: "e3c2f238-06d3-489e-a175-29841682cd1d"). InnerVolumeSpecName "kube-api-access-vh9hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.399226 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh9hq\" (UniqueName: \"kubernetes.io/projected/e3c2f238-06d3-489e-a175-29841682cd1d-kube-api-access-vh9hq\") on node \"crc\" DevicePath \"\"" Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.820023 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" event={"ID":"e3c2f238-06d3-489e-a175-29841682cd1d","Type":"ContainerDied","Data":"55c26bb6738770496afb57368b09d9aa9fc0429012e32afceabb80707bd17350"} Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.820072 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c26bb6738770496afb57368b09d9aa9fc0429012e32afceabb80707bd17350" Feb 25 13:50:04 crc kubenswrapper[5005]: I0225 13:50:04.820113 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533790-xf5wq" Feb 25 13:50:05 crc kubenswrapper[5005]: I0225 13:50:05.190150 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533784-mlsqv"] Feb 25 13:50:05 crc kubenswrapper[5005]: I0225 13:50:05.197851 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533784-mlsqv"] Feb 25 13:50:06 crc kubenswrapper[5005]: I0225 13:50:06.712828 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a45359c-f163-4c5c-ad4c-a51abf86772d" path="/var/lib/kubelet/pods/1a45359c-f163-4c5c-ad4c-a51abf86772d/volumes" Feb 25 13:50:09 crc kubenswrapper[5005]: I0225 13:50:09.685286 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:50:09 crc kubenswrapper[5005]: E0225 13:50:09.686068 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:50:20 crc kubenswrapper[5005]: I0225 13:50:20.685683 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:50:20 crc kubenswrapper[5005]: E0225 13:50:20.686686 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:50:27 crc kubenswrapper[5005]: I0225 13:50:27.082300 5005 scope.go:117] "RemoveContainer" containerID="3ddc5faecdf2805132adebdc230bd1d7f397a3c9750a3d1328eab720812c468a" Feb 25 13:50:35 crc kubenswrapper[5005]: I0225 13:50:35.686039 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:50:35 crc kubenswrapper[5005]: E0225 13:50:35.686983 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:50:47 crc kubenswrapper[5005]: I0225 13:50:47.686650 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:50:47 crc kubenswrapper[5005]: E0225 13:50:47.687308 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:51:01 crc kubenswrapper[5005]: I0225 13:51:01.688009 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:51:01 crc kubenswrapper[5005]: E0225 13:51:01.689334 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:51:12 crc kubenswrapper[5005]: I0225 13:51:12.685588 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:51:12 crc kubenswrapper[5005]: E0225 13:51:12.686359 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:51:23 crc kubenswrapper[5005]: I0225 13:51:23.686516 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:51:23 crc kubenswrapper[5005]: E0225 13:51:23.687415 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:51:35 crc kubenswrapper[5005]: I0225 13:51:35.685896 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:51:35 crc kubenswrapper[5005]: E0225 13:51:35.686773 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:51:49 crc kubenswrapper[5005]: I0225 13:51:49.685397 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:51:49 crc kubenswrapper[5005]: E0225 13:51:49.687523 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.146315 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533792-v9ddb"] Feb 25 13:52:00 crc kubenswrapper[5005]: E0225 13:52:00.147636 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c2f238-06d3-489e-a175-29841682cd1d" containerName="oc" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.147652 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c2f238-06d3-489e-a175-29841682cd1d" containerName="oc" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.147885 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c2f238-06d3-489e-a175-29841682cd1d" containerName="oc" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.148725 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.150488 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.150809 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.150934 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.158963 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533792-v9ddb"] Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.279384 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjm7v\" (UniqueName: \"kubernetes.io/projected/ffedae01-95b5-4a35-a6fd-159317168f4b-kube-api-access-bjm7v\") pod \"auto-csr-approver-29533792-v9ddb\" (UID: \"ffedae01-95b5-4a35-a6fd-159317168f4b\") " pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.381586 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjm7v\" (UniqueName: \"kubernetes.io/projected/ffedae01-95b5-4a35-a6fd-159317168f4b-kube-api-access-bjm7v\") pod \"auto-csr-approver-29533792-v9ddb\" (UID: \"ffedae01-95b5-4a35-a6fd-159317168f4b\") " pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.401010 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjm7v\" (UniqueName: \"kubernetes.io/projected/ffedae01-95b5-4a35-a6fd-159317168f4b-kube-api-access-bjm7v\") pod \"auto-csr-approver-29533792-v9ddb\" (UID: \"ffedae01-95b5-4a35-a6fd-159317168f4b\") " 
pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.473312 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:00 crc kubenswrapper[5005]: I0225 13:52:00.983149 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533792-v9ddb"] Feb 25 13:52:01 crc kubenswrapper[5005]: I0225 13:52:01.910854 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" event={"ID":"ffedae01-95b5-4a35-a6fd-159317168f4b","Type":"ContainerStarted","Data":"291ce08a995301f95df6d83c3ae9f06206ccbe17b189f8b1daa6131890ef3dd3"} Feb 25 13:52:02 crc kubenswrapper[5005]: I0225 13:52:02.685255 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:52:02 crc kubenswrapper[5005]: E0225 13:52:02.686111 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:52:02 crc kubenswrapper[5005]: I0225 13:52:02.920197 5005 generic.go:334] "Generic (PLEG): container finished" podID="ffedae01-95b5-4a35-a6fd-159317168f4b" containerID="a8b1e84e64e493aa129199c014c813724d91ddceeccc577d12dd1c67e89165f3" exitCode=0 Feb 25 13:52:02 crc kubenswrapper[5005]: I0225 13:52:02.920400 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" event={"ID":"ffedae01-95b5-4a35-a6fd-159317168f4b","Type":"ContainerDied","Data":"a8b1e84e64e493aa129199c014c813724d91ddceeccc577d12dd1c67e89165f3"} 
Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.259157 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.368891 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjm7v\" (UniqueName: \"kubernetes.io/projected/ffedae01-95b5-4a35-a6fd-159317168f4b-kube-api-access-bjm7v\") pod \"ffedae01-95b5-4a35-a6fd-159317168f4b\" (UID: \"ffedae01-95b5-4a35-a6fd-159317168f4b\") " Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.376037 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffedae01-95b5-4a35-a6fd-159317168f4b-kube-api-access-bjm7v" (OuterVolumeSpecName: "kube-api-access-bjm7v") pod "ffedae01-95b5-4a35-a6fd-159317168f4b" (UID: "ffedae01-95b5-4a35-a6fd-159317168f4b"). InnerVolumeSpecName "kube-api-access-bjm7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.472852 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjm7v\" (UniqueName: \"kubernetes.io/projected/ffedae01-95b5-4a35-a6fd-159317168f4b-kube-api-access-bjm7v\") on node \"crc\" DevicePath \"\"" Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.939866 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" event={"ID":"ffedae01-95b5-4a35-a6fd-159317168f4b","Type":"ContainerDied","Data":"291ce08a995301f95df6d83c3ae9f06206ccbe17b189f8b1daa6131890ef3dd3"} Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.940269 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291ce08a995301f95df6d83c3ae9f06206ccbe17b189f8b1daa6131890ef3dd3" Feb 25 13:52:04 crc kubenswrapper[5005]: I0225 13:52:04.939922 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533792-v9ddb" Feb 25 13:52:05 crc kubenswrapper[5005]: I0225 13:52:05.326872 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533786-fdtxj"] Feb 25 13:52:05 crc kubenswrapper[5005]: I0225 13:52:05.347781 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533786-fdtxj"] Feb 25 13:52:06 crc kubenswrapper[5005]: I0225 13:52:06.696498 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f22394-58df-4486-8143-5137cece0b7f" path="/var/lib/kubelet/pods/a9f22394-58df-4486-8143-5137cece0b7f/volumes" Feb 25 13:52:17 crc kubenswrapper[5005]: I0225 13:52:17.685425 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:52:17 crc kubenswrapper[5005]: E0225 13:52:17.686175 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:52:27 crc kubenswrapper[5005]: I0225 13:52:27.226261 5005 scope.go:117] "RemoveContainer" containerID="d282615803ea9a81d21bab6b35e26511ea730304d4fd0755bbf7aff554f09855" Feb 25 13:52:32 crc kubenswrapper[5005]: I0225 13:52:32.685731 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:52:32 crc kubenswrapper[5005]: E0225 13:52:32.686509 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:52:45 crc kubenswrapper[5005]: I0225 13:52:45.686416 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:52:45 crc kubenswrapper[5005]: E0225 13:52:45.687854 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:52:57 crc kubenswrapper[5005]: I0225 13:52:57.685460 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:52:57 crc kubenswrapper[5005]: E0225 13:52:57.686271 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:53:12 crc kubenswrapper[5005]: I0225 13:53:12.686006 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:53:12 crc kubenswrapper[5005]: E0225 13:53:12.686862 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:53:25 crc kubenswrapper[5005]: I0225 13:53:25.685979 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:53:25 crc kubenswrapper[5005]: E0225 13:53:25.686880 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:53:40 crc kubenswrapper[5005]: I0225 13:53:40.685470 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:53:40 crc kubenswrapper[5005]: E0225 13:53:40.686131 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:53:53 crc kubenswrapper[5005]: I0225 13:53:53.685653 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:53:53 crc kubenswrapper[5005]: E0225 13:53:53.686477 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.143257 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533794-sdw7r"] Feb 25 13:54:00 crc kubenswrapper[5005]: E0225 13:54:00.144314 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffedae01-95b5-4a35-a6fd-159317168f4b" containerName="oc" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.144327 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffedae01-95b5-4a35-a6fd-159317168f4b" containerName="oc" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.144523 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffedae01-95b5-4a35-a6fd-159317168f4b" containerName="oc" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.145158 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.146589 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.146600 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.147236 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.152987 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533794-sdw7r"] Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.281390 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5qc\" (UniqueName: \"kubernetes.io/projected/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19-kube-api-access-gt5qc\") pod \"auto-csr-approver-29533794-sdw7r\" (UID: \"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19\") " pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.383361 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5qc\" (UniqueName: \"kubernetes.io/projected/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19-kube-api-access-gt5qc\") pod \"auto-csr-approver-29533794-sdw7r\" (UID: \"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19\") " pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.405629 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5qc\" (UniqueName: \"kubernetes.io/projected/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19-kube-api-access-gt5qc\") pod \"auto-csr-approver-29533794-sdw7r\" (UID: \"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19\") " 
pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.462847 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.946510 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533794-sdw7r"] Feb 25 13:54:00 crc kubenswrapper[5005]: W0225 13:54:00.961386 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bb0b06b_bc4a_4f7e_a256_63f0995f1f19.slice/crio-5b1926cceadf861399edda32e1cb86ebd65b9f28348d081a8eb5ea30293bdbd3 WatchSource:0}: Error finding container 5b1926cceadf861399edda32e1cb86ebd65b9f28348d081a8eb5ea30293bdbd3: Status 404 returned error can't find the container with id 5b1926cceadf861399edda32e1cb86ebd65b9f28348d081a8eb5ea30293bdbd3 Feb 25 13:54:00 crc kubenswrapper[5005]: I0225 13:54:00.965220 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:54:01 crc kubenswrapper[5005]: I0225 13:54:01.979239 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" event={"ID":"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19","Type":"ContainerStarted","Data":"5b1926cceadf861399edda32e1cb86ebd65b9f28348d081a8eb5ea30293bdbd3"} Feb 25 13:54:02 crc kubenswrapper[5005]: I0225 13:54:02.988062 5005 generic.go:334] "Generic (PLEG): container finished" podID="0bb0b06b-bc4a-4f7e-a256-63f0995f1f19" containerID="fc90c498b00aab0e9f0169f8ccbc751c5d2f8859b40256b28d9552393890826f" exitCode=0 Feb 25 13:54:02 crc kubenswrapper[5005]: I0225 13:54:02.988112 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" 
event={"ID":"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19","Type":"ContainerDied","Data":"fc90c498b00aab0e9f0169f8ccbc751c5d2f8859b40256b28d9552393890826f"} Feb 25 13:54:04 crc kubenswrapper[5005]: I0225 13:54:04.303076 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:04 crc kubenswrapper[5005]: I0225 13:54:04.464727 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt5qc\" (UniqueName: \"kubernetes.io/projected/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19-kube-api-access-gt5qc\") pod \"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19\" (UID: \"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19\") " Feb 25 13:54:04 crc kubenswrapper[5005]: I0225 13:54:04.470715 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19-kube-api-access-gt5qc" (OuterVolumeSpecName: "kube-api-access-gt5qc") pod "0bb0b06b-bc4a-4f7e-a256-63f0995f1f19" (UID: "0bb0b06b-bc4a-4f7e-a256-63f0995f1f19"). InnerVolumeSpecName "kube-api-access-gt5qc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:54:04 crc kubenswrapper[5005]: I0225 13:54:04.567115 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt5qc\" (UniqueName: \"kubernetes.io/projected/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19-kube-api-access-gt5qc\") on node \"crc\" DevicePath \"\"" Feb 25 13:54:05 crc kubenswrapper[5005]: I0225 13:54:05.004298 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" event={"ID":"0bb0b06b-bc4a-4f7e-a256-63f0995f1f19","Type":"ContainerDied","Data":"5b1926cceadf861399edda32e1cb86ebd65b9f28348d081a8eb5ea30293bdbd3"} Feb 25 13:54:05 crc kubenswrapper[5005]: I0225 13:54:05.004336 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b1926cceadf861399edda32e1cb86ebd65b9f28348d081a8eb5ea30293bdbd3" Feb 25 13:54:05 crc kubenswrapper[5005]: I0225 13:54:05.004363 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533794-sdw7r" Feb 25 13:54:05 crc kubenswrapper[5005]: I0225 13:54:05.361194 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533788-9rc8j"] Feb 25 13:54:05 crc kubenswrapper[5005]: I0225 13:54:05.369209 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533788-9rc8j"] Feb 25 13:54:06 crc kubenswrapper[5005]: I0225 13:54:06.692925 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:54:06 crc kubenswrapper[5005]: E0225 13:54:06.693230 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:54:06 crc kubenswrapper[5005]: I0225 13:54:06.697961 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a" path="/var/lib/kubelet/pods/4c2917e8-f0fd-4a50-b7d5-823e62f4cc9a/volumes" Feb 25 13:54:21 crc kubenswrapper[5005]: I0225 13:54:21.686074 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:54:21 crc kubenswrapper[5005]: E0225 13:54:21.687493 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:54:27 crc kubenswrapper[5005]: I0225 13:54:27.321617 5005 scope.go:117] "RemoveContainer" containerID="07659e8997170e4f7e68ab3fff7a1bdd0aaa99d4a2a4498a9036479ca3fc804e" Feb 25 13:54:32 crc kubenswrapper[5005]: I0225 13:54:32.686088 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:54:32 crc kubenswrapper[5005]: E0225 13:54:32.688824 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:54:45 crc kubenswrapper[5005]: I0225 13:54:45.685921 5005 scope.go:117] "RemoveContainer" 
containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:54:45 crc kubenswrapper[5005]: E0225 13:54:45.686985 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:54:57 crc kubenswrapper[5005]: I0225 13:54:57.685520 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:54:57 crc kubenswrapper[5005]: E0225 13:54:57.686495 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 13:55:11 crc kubenswrapper[5005]: I0225 13:55:11.685580 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:55:12 crc kubenswrapper[5005]: I0225 13:55:12.592941 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"0dce4e7103bd6e14723c9dec0cd250411a0416d07006d4293ec16e96d56e8d6c"} Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.503343 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tqxp"] Feb 25 13:55:48 crc kubenswrapper[5005]: E0225 13:55:48.504387 5005 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb0b06b-bc4a-4f7e-a256-63f0995f1f19" containerName="oc" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.504404 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb0b06b-bc4a-4f7e-a256-63f0995f1f19" containerName="oc" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.504640 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb0b06b-bc4a-4f7e-a256-63f0995f1f19" containerName="oc" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.506242 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.527067 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tqxp"] Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.632741 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-utilities\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.632917 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-catalog-content\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.633025 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9hk\" (UniqueName: \"kubernetes.io/projected/4151ffe5-c07d-41eb-9480-36b5330da211-kube-api-access-9v9hk\") pod 
\"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.735011 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9hk\" (UniqueName: \"kubernetes.io/projected/4151ffe5-c07d-41eb-9480-36b5330da211-kube-api-access-9v9hk\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.735150 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-utilities\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.735233 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-catalog-content\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.735887 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-utilities\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.735970 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-catalog-content\") pod \"certified-operators-4tqxp\" (UID: 
\"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.765287 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9hk\" (UniqueName: \"kubernetes.io/projected/4151ffe5-c07d-41eb-9480-36b5330da211-kube-api-access-9v9hk\") pod \"certified-operators-4tqxp\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:48 crc kubenswrapper[5005]: I0225 13:55:48.838156 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:49 crc kubenswrapper[5005]: I0225 13:55:49.363717 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tqxp"] Feb 25 13:55:49 crc kubenswrapper[5005]: I0225 13:55:49.920195 5005 generic.go:334] "Generic (PLEG): container finished" podID="4151ffe5-c07d-41eb-9480-36b5330da211" containerID="996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3" exitCode=0 Feb 25 13:55:49 crc kubenswrapper[5005]: I0225 13:55:49.920268 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqxp" event={"ID":"4151ffe5-c07d-41eb-9480-36b5330da211","Type":"ContainerDied","Data":"996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3"} Feb 25 13:55:49 crc kubenswrapper[5005]: I0225 13:55:49.920333 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqxp" event={"ID":"4151ffe5-c07d-41eb-9480-36b5330da211","Type":"ContainerStarted","Data":"8a01113cc5a6d907ab7550af886672b43ec228aca2e80882d3933b8f4201a89f"} Feb 25 13:55:51 crc kubenswrapper[5005]: I0225 13:55:51.942028 5005 generic.go:334] "Generic (PLEG): container finished" podID="4151ffe5-c07d-41eb-9480-36b5330da211" 
containerID="ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca" exitCode=0 Feb 25 13:55:51 crc kubenswrapper[5005]: I0225 13:55:51.942118 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqxp" event={"ID":"4151ffe5-c07d-41eb-9480-36b5330da211","Type":"ContainerDied","Data":"ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca"} Feb 25 13:55:52 crc kubenswrapper[5005]: I0225 13:55:52.957513 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqxp" event={"ID":"4151ffe5-c07d-41eb-9480-36b5330da211","Type":"ContainerStarted","Data":"e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751"} Feb 25 13:55:52 crc kubenswrapper[5005]: I0225 13:55:52.975250 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tqxp" podStartSLOduration=2.48325263 podStartE2EDuration="4.975232865s" podCreationTimestamp="2026-02-25 13:55:48 +0000 UTC" firstStartedPulling="2026-02-25 13:55:49.923043052 +0000 UTC m=+9463.963775379" lastFinishedPulling="2026-02-25 13:55:52.415023277 +0000 UTC m=+9466.455755614" observedRunningTime="2026-02-25 13:55:52.972077587 +0000 UTC m=+9467.012809914" watchObservedRunningTime="2026-02-25 13:55:52.975232865 +0000 UTC m=+9467.015965182" Feb 25 13:55:58 crc kubenswrapper[5005]: I0225 13:55:58.838710 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:58 crc kubenswrapper[5005]: I0225 13:55:58.839681 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:58 crc kubenswrapper[5005]: I0225 13:55:58.890823 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:59 crc kubenswrapper[5005]: I0225 
13:55:59.064025 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:55:59 crc kubenswrapper[5005]: I0225 13:55:59.125961 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tqxp"] Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.152797 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533796-tfmk8"] Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.154031 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.155939 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.156227 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.156365 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.166147 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533796-tfmk8"] Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.176590 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjp6\" (UniqueName: \"kubernetes.io/projected/8e6f95c5-4d98-4a5e-afde-ab241829e009-kube-api-access-wvjp6\") pod \"auto-csr-approver-29533796-tfmk8\" (UID: \"8e6f95c5-4d98-4a5e-afde-ab241829e009\") " pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.278116 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjp6\" 
(UniqueName: \"kubernetes.io/projected/8e6f95c5-4d98-4a5e-afde-ab241829e009-kube-api-access-wvjp6\") pod \"auto-csr-approver-29533796-tfmk8\" (UID: \"8e6f95c5-4d98-4a5e-afde-ab241829e009\") " pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.294475 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjp6\" (UniqueName: \"kubernetes.io/projected/8e6f95c5-4d98-4a5e-afde-ab241829e009-kube-api-access-wvjp6\") pod \"auto-csr-approver-29533796-tfmk8\" (UID: \"8e6f95c5-4d98-4a5e-afde-ab241829e009\") " pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.481114 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:00 crc kubenswrapper[5005]: I0225 13:56:00.966471 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533796-tfmk8"] Feb 25 13:56:00 crc kubenswrapper[5005]: W0225 13:56:00.980499 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6f95c5_4d98_4a5e_afde_ab241829e009.slice/crio-5cb9e3cdfb8778a6e5549060e9b82bb8f5c9d06d9484e6e90c1d3e543cdf7059 WatchSource:0}: Error finding container 5cb9e3cdfb8778a6e5549060e9b82bb8f5c9d06d9484e6e90c1d3e543cdf7059: Status 404 returned error can't find the container with id 5cb9e3cdfb8778a6e5549060e9b82bb8f5c9d06d9484e6e90c1d3e543cdf7059 Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.030593 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" event={"ID":"8e6f95c5-4d98-4a5e-afde-ab241829e009","Type":"ContainerStarted","Data":"5cb9e3cdfb8778a6e5549060e9b82bb8f5c9d06d9484e6e90c1d3e543cdf7059"} Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.030772 5005 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-4tqxp" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="registry-server" containerID="cri-o://e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751" gracePeriod=2 Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.440405 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.603175 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-utilities\") pod \"4151ffe5-c07d-41eb-9480-36b5330da211\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.603482 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-catalog-content\") pod \"4151ffe5-c07d-41eb-9480-36b5330da211\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.603662 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9hk\" (UniqueName: \"kubernetes.io/projected/4151ffe5-c07d-41eb-9480-36b5330da211-kube-api-access-9v9hk\") pod \"4151ffe5-c07d-41eb-9480-36b5330da211\" (UID: \"4151ffe5-c07d-41eb-9480-36b5330da211\") " Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.604233 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-utilities" (OuterVolumeSpecName: "utilities") pod "4151ffe5-c07d-41eb-9480-36b5330da211" (UID: "4151ffe5-c07d-41eb-9480-36b5330da211"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.604444 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.647129 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4151ffe5-c07d-41eb-9480-36b5330da211-kube-api-access-9v9hk" (OuterVolumeSpecName: "kube-api-access-9v9hk") pod "4151ffe5-c07d-41eb-9480-36b5330da211" (UID: "4151ffe5-c07d-41eb-9480-36b5330da211"). InnerVolumeSpecName "kube-api-access-9v9hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:56:01 crc kubenswrapper[5005]: I0225 13:56:01.706915 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9hk\" (UniqueName: \"kubernetes.io/projected/4151ffe5-c07d-41eb-9480-36b5330da211-kube-api-access-9v9hk\") on node \"crc\" DevicePath \"\"" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.056779 5005 generic.go:334] "Generic (PLEG): container finished" podID="4151ffe5-c07d-41eb-9480-36b5330da211" containerID="e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751" exitCode=0 Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.056819 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqxp" event={"ID":"4151ffe5-c07d-41eb-9480-36b5330da211","Type":"ContainerDied","Data":"e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751"} Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.056846 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqxp" event={"ID":"4151ffe5-c07d-41eb-9480-36b5330da211","Type":"ContainerDied","Data":"8a01113cc5a6d907ab7550af886672b43ec228aca2e80882d3933b8f4201a89f"} Feb 25 13:56:02 crc kubenswrapper[5005]: 
I0225 13:56:02.056865 5005 scope.go:117] "RemoveContainer" containerID="e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.056987 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tqxp" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.084272 5005 scope.go:117] "RemoveContainer" containerID="ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.104759 5005 scope.go:117] "RemoveContainer" containerID="996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.162937 5005 scope.go:117] "RemoveContainer" containerID="e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751" Feb 25 13:56:02 crc kubenswrapper[5005]: E0225 13:56:02.163782 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751\": container with ID starting with e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751 not found: ID does not exist" containerID="e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.163838 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751"} err="failed to get container status \"e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751\": rpc error: code = NotFound desc = could not find container \"e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751\": container with ID starting with e1239c40f4fffdc77025b067beb0f6222634063eeed149f26c6b26e435960751 not found: ID does not exist" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.163867 5005 
scope.go:117] "RemoveContainer" containerID="ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca" Feb 25 13:56:02 crc kubenswrapper[5005]: E0225 13:56:02.173008 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca\": container with ID starting with ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca not found: ID does not exist" containerID="ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.173063 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca"} err="failed to get container status \"ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca\": rpc error: code = NotFound desc = could not find container \"ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca\": container with ID starting with ca8e9fb1956f8fe897fc28d5dcc9f2068a17fc79ca15fbd6d0ed9ded4f5cb6ca not found: ID does not exist" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.173096 5005 scope.go:117] "RemoveContainer" containerID="996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3" Feb 25 13:56:02 crc kubenswrapper[5005]: E0225 13:56:02.176035 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3\": container with ID starting with 996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3 not found: ID does not exist" containerID="996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.176066 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3"} err="failed to get container status \"996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3\": rpc error: code = NotFound desc = could not find container \"996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3\": container with ID starting with 996554d933e3577d654004365c361ab360559267fb61475c211b4577b3ddccd3 not found: ID does not exist" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.376007 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4151ffe5-c07d-41eb-9480-36b5330da211" (UID: "4151ffe5-c07d-41eb-9480-36b5330da211"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.420774 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4151ffe5-c07d-41eb-9480-36b5330da211-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.706073 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tqxp"] Feb 25 13:56:02 crc kubenswrapper[5005]: I0225 13:56:02.715022 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4tqxp"] Feb 25 13:56:03 crc kubenswrapper[5005]: I0225 13:56:03.068002 5005 generic.go:334] "Generic (PLEG): container finished" podID="8e6f95c5-4d98-4a5e-afde-ab241829e009" containerID="9fb40e56f2944e165a140dbf64ed851f9d0717fc8d4cd4eca656c440b3a253a0" exitCode=0 Feb 25 13:56:03 crc kubenswrapper[5005]: I0225 13:56:03.068051 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" 
event={"ID":"8e6f95c5-4d98-4a5e-afde-ab241829e009","Type":"ContainerDied","Data":"9fb40e56f2944e165a140dbf64ed851f9d0717fc8d4cd4eca656c440b3a253a0"} Feb 25 13:56:04 crc kubenswrapper[5005]: I0225 13:56:04.406639 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:04 crc kubenswrapper[5005]: I0225 13:56:04.463538 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjp6\" (UniqueName: \"kubernetes.io/projected/8e6f95c5-4d98-4a5e-afde-ab241829e009-kube-api-access-wvjp6\") pod \"8e6f95c5-4d98-4a5e-afde-ab241829e009\" (UID: \"8e6f95c5-4d98-4a5e-afde-ab241829e009\") " Feb 25 13:56:04 crc kubenswrapper[5005]: I0225 13:56:04.469330 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6f95c5-4d98-4a5e-afde-ab241829e009-kube-api-access-wvjp6" (OuterVolumeSpecName: "kube-api-access-wvjp6") pod "8e6f95c5-4d98-4a5e-afde-ab241829e009" (UID: "8e6f95c5-4d98-4a5e-afde-ab241829e009"). InnerVolumeSpecName "kube-api-access-wvjp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:56:04 crc kubenswrapper[5005]: I0225 13:56:04.566051 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvjp6\" (UniqueName: \"kubernetes.io/projected/8e6f95c5-4d98-4a5e-afde-ab241829e009-kube-api-access-wvjp6\") on node \"crc\" DevicePath \"\"" Feb 25 13:56:04 crc kubenswrapper[5005]: I0225 13:56:04.695869 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" path="/var/lib/kubelet/pods/4151ffe5-c07d-41eb-9480-36b5330da211/volumes" Feb 25 13:56:05 crc kubenswrapper[5005]: I0225 13:56:05.090846 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" event={"ID":"8e6f95c5-4d98-4a5e-afde-ab241829e009","Type":"ContainerDied","Data":"5cb9e3cdfb8778a6e5549060e9b82bb8f5c9d06d9484e6e90c1d3e543cdf7059"} Feb 25 13:56:05 crc kubenswrapper[5005]: I0225 13:56:05.090887 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb9e3cdfb8778a6e5549060e9b82bb8f5c9d06d9484e6e90c1d3e543cdf7059" Feb 25 13:56:05 crc kubenswrapper[5005]: I0225 13:56:05.090907 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533796-tfmk8" Feb 25 13:56:05 crc kubenswrapper[5005]: I0225 13:56:05.466292 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533790-xf5wq"] Feb 25 13:56:05 crc kubenswrapper[5005]: I0225 13:56:05.473891 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533790-xf5wq"] Feb 25 13:56:06 crc kubenswrapper[5005]: I0225 13:56:06.696965 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c2f238-06d3-489e-a175-29841682cd1d" path="/var/lib/kubelet/pods/e3c2f238-06d3-489e-a175-29841682cd1d/volumes" Feb 25 13:56:27 crc kubenswrapper[5005]: I0225 13:56:27.408151 5005 scope.go:117] "RemoveContainer" containerID="9b274e03b37b04b19132a44c900ee67f34a19a361f6aa415d23da4351da32209" Feb 25 13:57:28 crc kubenswrapper[5005]: I0225 13:57:28.088033 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:57:28 crc kubenswrapper[5005]: I0225 13:57:28.088769 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:57:58 crc kubenswrapper[5005]: I0225 13:57:58.088746 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:57:58 crc kubenswrapper[5005]: 
I0225 13:57:58.089409 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.145937 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533798-9pfx5"] Feb 25 13:58:00 crc kubenswrapper[5005]: E0225 13:58:00.146677 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f95c5-4d98-4a5e-afde-ab241829e009" containerName="oc" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.146691 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f95c5-4d98-4a5e-afde-ab241829e009" containerName="oc" Feb 25 13:58:00 crc kubenswrapper[5005]: E0225 13:58:00.146713 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="extract-content" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.146719 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="extract-content" Feb 25 13:58:00 crc kubenswrapper[5005]: E0225 13:58:00.146729 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="registry-server" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.146735 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="registry-server" Feb 25 13:58:00 crc kubenswrapper[5005]: E0225 13:58:00.146748 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="extract-utilities" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.146754 5005 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="extract-utilities" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.146925 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6f95c5-4d98-4a5e-afde-ab241829e009" containerName="oc" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.146945 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4151ffe5-c07d-41eb-9480-36b5330da211" containerName="registry-server" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.147647 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.149848 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.149881 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.150211 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.173204 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533798-9pfx5"] Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.285050 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpfx\" (UniqueName: \"kubernetes.io/projected/a4c385c6-ce55-4264-a6dc-2453e7da9e2a-kube-api-access-rjpfx\") pod \"auto-csr-approver-29533798-9pfx5\" (UID: \"a4c385c6-ce55-4264-a6dc-2453e7da9e2a\") " pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.387290 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjpfx\" 
(UniqueName: \"kubernetes.io/projected/a4c385c6-ce55-4264-a6dc-2453e7da9e2a-kube-api-access-rjpfx\") pod \"auto-csr-approver-29533798-9pfx5\" (UID: \"a4c385c6-ce55-4264-a6dc-2453e7da9e2a\") " pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.417257 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjpfx\" (UniqueName: \"kubernetes.io/projected/a4c385c6-ce55-4264-a6dc-2453e7da9e2a-kube-api-access-rjpfx\") pod \"auto-csr-approver-29533798-9pfx5\" (UID: \"a4c385c6-ce55-4264-a6dc-2453e7da9e2a\") " pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.472282 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:00 crc kubenswrapper[5005]: I0225 13:58:00.964659 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533798-9pfx5"] Feb 25 13:58:01 crc kubenswrapper[5005]: I0225 13:58:01.112653 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" event={"ID":"a4c385c6-ce55-4264-a6dc-2453e7da9e2a","Type":"ContainerStarted","Data":"798e45fce1ebd5594f088c16d107264f487440dea93374155c8a1f9c8aac9625"} Feb 25 13:58:03 crc kubenswrapper[5005]: I0225 13:58:03.130605 5005 generic.go:334] "Generic (PLEG): container finished" podID="a4c385c6-ce55-4264-a6dc-2453e7da9e2a" containerID="707335ced59666285654f7cdf8c81563a97c9da35636e8d377aa14d4942a3b41" exitCode=0 Feb 25 13:58:03 crc kubenswrapper[5005]: I0225 13:58:03.130678 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" event={"ID":"a4c385c6-ce55-4264-a6dc-2453e7da9e2a","Type":"ContainerDied","Data":"707335ced59666285654f7cdf8c81563a97c9da35636e8d377aa14d4942a3b41"} Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.455711 5005 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.473678 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjpfx\" (UniqueName: \"kubernetes.io/projected/a4c385c6-ce55-4264-a6dc-2453e7da9e2a-kube-api-access-rjpfx\") pod \"a4c385c6-ce55-4264-a6dc-2453e7da9e2a\" (UID: \"a4c385c6-ce55-4264-a6dc-2453e7da9e2a\") " Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.480006 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c385c6-ce55-4264-a6dc-2453e7da9e2a-kube-api-access-rjpfx" (OuterVolumeSpecName: "kube-api-access-rjpfx") pod "a4c385c6-ce55-4264-a6dc-2453e7da9e2a" (UID: "a4c385c6-ce55-4264-a6dc-2453e7da9e2a"). InnerVolumeSpecName "kube-api-access-rjpfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.577389 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjpfx\" (UniqueName: \"kubernetes.io/projected/a4c385c6-ce55-4264-a6dc-2453e7da9e2a-kube-api-access-rjpfx\") on node \"crc\" DevicePath \"\"" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.798535 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rprkz"] Feb 25 13:58:04 crc kubenswrapper[5005]: E0225 13:58:04.799616 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c385c6-ce55-4264-a6dc-2453e7da9e2a" containerName="oc" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.799773 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c385c6-ce55-4264-a6dc-2453e7da9e2a" containerName="oc" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.800300 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c385c6-ce55-4264-a6dc-2453e7da9e2a" containerName="oc" Feb 25 13:58:04 crc 
kubenswrapper[5005]: I0225 13:58:04.802818 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.812038 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rprkz"] Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.985503 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-catalog-content\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.985610 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-utilities\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:04 crc kubenswrapper[5005]: I0225 13:58:04.985829 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7p95\" (UniqueName: \"kubernetes.io/projected/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-kube-api-access-b7p95\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.088227 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-catalog-content\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc 
kubenswrapper[5005]: I0225 13:58:05.088306 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-utilities\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.088407 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7p95\" (UniqueName: \"kubernetes.io/projected/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-kube-api-access-b7p95\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.088726 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-catalog-content\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.089147 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-utilities\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.112988 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7p95\" (UniqueName: \"kubernetes.io/projected/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-kube-api-access-b7p95\") pod \"redhat-operators-rprkz\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.135450 5005 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.149956 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" event={"ID":"a4c385c6-ce55-4264-a6dc-2453e7da9e2a","Type":"ContainerDied","Data":"798e45fce1ebd5594f088c16d107264f487440dea93374155c8a1f9c8aac9625"} Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.150617 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798e45fce1ebd5594f088c16d107264f487440dea93374155c8a1f9c8aac9625" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.150583 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533798-9pfx5" Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.518219 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533792-v9ddb"] Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.525951 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533792-v9ddb"] Feb 25 13:58:05 crc kubenswrapper[5005]: I0225 13:58:05.650809 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rprkz"] Feb 25 13:58:05 crc kubenswrapper[5005]: W0225 13:58:05.652487 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce4399eb_0ef8_44dd_aa84_1448fc2d78b3.slice/crio-d5d6d21a7596e0ba0f26f55dc4c0e4d51b9a2d41885ca8dec4d0331b8e63bcd6 WatchSource:0}: Error finding container d5d6d21a7596e0ba0f26f55dc4c0e4d51b9a2d41885ca8dec4d0331b8e63bcd6: Status 404 returned error can't find the container with id d5d6d21a7596e0ba0f26f55dc4c0e4d51b9a2d41885ca8dec4d0331b8e63bcd6 Feb 25 13:58:06 crc kubenswrapper[5005]: I0225 13:58:06.163005 5005 generic.go:334] "Generic 
(PLEG): container finished" podID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerID="ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b" exitCode=0 Feb 25 13:58:06 crc kubenswrapper[5005]: I0225 13:58:06.163054 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerDied","Data":"ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b"} Feb 25 13:58:06 crc kubenswrapper[5005]: I0225 13:58:06.163336 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerStarted","Data":"d5d6d21a7596e0ba0f26f55dc4c0e4d51b9a2d41885ca8dec4d0331b8e63bcd6"} Feb 25 13:58:06 crc kubenswrapper[5005]: I0225 13:58:06.697476 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffedae01-95b5-4a35-a6fd-159317168f4b" path="/var/lib/kubelet/pods/ffedae01-95b5-4a35-a6fd-159317168f4b/volumes" Feb 25 13:58:07 crc kubenswrapper[5005]: I0225 13:58:07.180005 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerStarted","Data":"1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798"} Feb 25 13:58:08 crc kubenswrapper[5005]: I0225 13:58:08.190866 5005 generic.go:334] "Generic (PLEG): container finished" podID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerID="1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798" exitCode=0 Feb 25 13:58:08 crc kubenswrapper[5005]: I0225 13:58:08.190873 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerDied","Data":"1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798"} Feb 25 13:58:09 crc kubenswrapper[5005]: I0225 
13:58:09.202750 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerStarted","Data":"e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea"} Feb 25 13:58:09 crc kubenswrapper[5005]: I0225 13:58:09.221870 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rprkz" podStartSLOduration=2.715234432 podStartE2EDuration="5.221853946s" podCreationTimestamp="2026-02-25 13:58:04 +0000 UTC" firstStartedPulling="2026-02-25 13:58:06.164722878 +0000 UTC m=+9600.205455205" lastFinishedPulling="2026-02-25 13:58:08.671342392 +0000 UTC m=+9602.712074719" observedRunningTime="2026-02-25 13:58:09.219001466 +0000 UTC m=+9603.259733813" watchObservedRunningTime="2026-02-25 13:58:09.221853946 +0000 UTC m=+9603.262586273" Feb 25 13:58:15 crc kubenswrapper[5005]: I0225 13:58:15.136674 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:15 crc kubenswrapper[5005]: I0225 13:58:15.137266 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:15 crc kubenswrapper[5005]: I0225 13:58:15.180319 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:15 crc kubenswrapper[5005]: I0225 13:58:15.316087 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:15 crc kubenswrapper[5005]: I0225 13:58:15.450328 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rprkz"] Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.286305 5005 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-rprkz" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="registry-server" containerID="cri-o://e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea" gracePeriod=2 Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.804450 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.856119 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-utilities\") pod \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.856298 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7p95\" (UniqueName: \"kubernetes.io/projected/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-kube-api-access-b7p95\") pod \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.856720 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-catalog-content\") pod \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\" (UID: \"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3\") " Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.857201 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-utilities" (OuterVolumeSpecName: "utilities") pod "ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" (UID: "ce4399eb-0ef8-44dd-aa84-1448fc2d78b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.857969 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.863797 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-kube-api-access-b7p95" (OuterVolumeSpecName: "kube-api-access-b7p95") pod "ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" (UID: "ce4399eb-0ef8-44dd-aa84-1448fc2d78b3"). InnerVolumeSpecName "kube-api-access-b7p95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:58:17 crc kubenswrapper[5005]: I0225 13:58:17.959594 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7p95\" (UniqueName: \"kubernetes.io/projected/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-kube-api-access-b7p95\") on node \"crc\" DevicePath \"\"" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.298433 5005 generic.go:334] "Generic (PLEG): container finished" podID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerID="e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea" exitCode=0 Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.298484 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerDied","Data":"e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea"} Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.298770 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprkz" event={"ID":"ce4399eb-0ef8-44dd-aa84-1448fc2d78b3","Type":"ContainerDied","Data":"d5d6d21a7596e0ba0f26f55dc4c0e4d51b9a2d41885ca8dec4d0331b8e63bcd6"} Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 
13:58:18.298797 5005 scope.go:117] "RemoveContainer" containerID="e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.298514 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprkz" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.337949 5005 scope.go:117] "RemoveContainer" containerID="1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.359356 5005 scope.go:117] "RemoveContainer" containerID="ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.406472 5005 scope.go:117] "RemoveContainer" containerID="e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea" Feb 25 13:58:18 crc kubenswrapper[5005]: E0225 13:58:18.407052 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea\": container with ID starting with e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea not found: ID does not exist" containerID="e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.407119 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea"} err="failed to get container status \"e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea\": rpc error: code = NotFound desc = could not find container \"e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea\": container with ID starting with e937307eae064346a8dc7dc97e173f5cb6f4e23231b3bc8d9c4a3c1d1a7d0bea not found: ID does not exist" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.407144 5005 
scope.go:117] "RemoveContainer" containerID="1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798" Feb 25 13:58:18 crc kubenswrapper[5005]: E0225 13:58:18.407556 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798\": container with ID starting with 1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798 not found: ID does not exist" containerID="1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.407680 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798"} err="failed to get container status \"1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798\": rpc error: code = NotFound desc = could not find container \"1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798\": container with ID starting with 1df7991cd62d24b5d9e83b61cb52006857a4dec571d640df261714f7ada58798 not found: ID does not exist" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.407773 5005 scope.go:117] "RemoveContainer" containerID="ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b" Feb 25 13:58:18 crc kubenswrapper[5005]: E0225 13:58:18.409021 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b\": container with ID starting with ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b not found: ID does not exist" containerID="ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b" Feb 25 13:58:18 crc kubenswrapper[5005]: I0225 13:58:18.409046 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b"} err="failed to get container status \"ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b\": rpc error: code = NotFound desc = could not find container \"ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b\": container with ID starting with ced835b28b3195a0463fdb544e417067654931dce9ea6240afbafb27da08740b not found: ID does not exist" Feb 25 13:58:20 crc kubenswrapper[5005]: I0225 13:58:20.401980 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" (UID: "ce4399eb-0ef8-44dd-aa84-1448fc2d78b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:58:20 crc kubenswrapper[5005]: I0225 13:58:20.406274 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:58:20 crc kubenswrapper[5005]: I0225 13:58:20.756147 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rprkz"] Feb 25 13:58:20 crc kubenswrapper[5005]: I0225 13:58:20.767927 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rprkz"] Feb 25 13:58:22 crc kubenswrapper[5005]: I0225 13:58:22.696354 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" path="/var/lib/kubelet/pods/ce4399eb-0ef8-44dd-aa84-1448fc2d78b3/volumes" Feb 25 13:58:27 crc kubenswrapper[5005]: I0225 13:58:27.501059 5005 scope.go:117] "RemoveContainer" containerID="a8b1e84e64e493aa129199c014c813724d91ddceeccc577d12dd1c67e89165f3" Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 
13:58:28.087457 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.087522 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.087566 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.088296 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dce4e7103bd6e14723c9dec0cd250411a0416d07006d4293ec16e96d56e8d6c"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.088353 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://0dce4e7103bd6e14723c9dec0cd250411a0416d07006d4293ec16e96d56e8d6c" gracePeriod=600 Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.395477 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="0dce4e7103bd6e14723c9dec0cd250411a0416d07006d4293ec16e96d56e8d6c" exitCode=0 Feb 25 
13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.395597 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"0dce4e7103bd6e14723c9dec0cd250411a0416d07006d4293ec16e96d56e8d6c"} Feb 25 13:58:28 crc kubenswrapper[5005]: I0225 13:58:28.395808 5005 scope.go:117] "RemoveContainer" containerID="e73225572e6e0a8f193a76f2756f84c03ea35884dc40a26c584e9b4d7ff340f4" Feb 25 13:58:29 crc kubenswrapper[5005]: I0225 13:58:29.405151 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"} Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.057031 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-48vlq"] Feb 25 13:59:13 crc kubenswrapper[5005]: E0225 13:59:13.057935 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="registry-server" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.057947 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="registry-server" Feb 25 13:59:13 crc kubenswrapper[5005]: E0225 13:59:13.057969 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="extract-content" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.057975 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="extract-content" Feb 25 13:59:13 crc kubenswrapper[5005]: E0225 13:59:13.057994 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" 
containerName="extract-utilities" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.058000 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="extract-utilities" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.058179 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4399eb-0ef8-44dd-aa84-1448fc2d78b3" containerName="registry-server" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.060697 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.084643 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48vlq"] Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.125690 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr488\" (UniqueName: \"kubernetes.io/projected/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-kube-api-access-fr488\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.125928 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-catalog-content\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.126131 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-utilities\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " 
pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.227857 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-utilities\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.227963 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr488\" (UniqueName: \"kubernetes.io/projected/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-kube-api-access-fr488\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.228046 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-catalog-content\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.228621 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-catalog-content\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.228849 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-utilities\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " 
pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.263114 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr488\" (UniqueName: \"kubernetes.io/projected/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-kube-api-access-fr488\") pod \"community-operators-48vlq\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.394845 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:13 crc kubenswrapper[5005]: I0225 13:59:13.913127 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48vlq"] Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.450209 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m92xd"] Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.452802 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.460719 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m92xd"] Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.555680 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-catalog-content\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.555904 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-utilities\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.556004 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2kc\" (UniqueName: \"kubernetes.io/projected/afd98343-4162-4b59-a3ab-ef517df99459-kube-api-access-ld2kc\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.658089 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2kc\" (UniqueName: \"kubernetes.io/projected/afd98343-4162-4b59-a3ab-ef517df99459-kube-api-access-ld2kc\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.658278 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-catalog-content\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.658368 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-utilities\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.659017 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-utilities\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.659756 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-catalog-content\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.684521 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2kc\" (UniqueName: \"kubernetes.io/projected/afd98343-4162-4b59-a3ab-ef517df99459-kube-api-access-ld2kc\") pod \"redhat-marketplace-m92xd\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.779073 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.868682 5005 generic.go:334] "Generic (PLEG): container finished" podID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerID="a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f" exitCode=0 Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.869057 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48vlq" event={"ID":"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f","Type":"ContainerDied","Data":"a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f"} Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.869424 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48vlq" event={"ID":"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f","Type":"ContainerStarted","Data":"32535d7096484aac379fe5616d480dfcba551e024e3ea15b58e963ae52673b6d"} Feb 25 13:59:14 crc kubenswrapper[5005]: I0225 13:59:14.872650 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 13:59:15 crc kubenswrapper[5005]: I0225 13:59:15.283881 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m92xd"] Feb 25 13:59:15 crc kubenswrapper[5005]: I0225 13:59:15.878434 5005 generic.go:334] "Generic (PLEG): container finished" podID="afd98343-4162-4b59-a3ab-ef517df99459" containerID="ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c" exitCode=0 Feb 25 13:59:15 crc kubenswrapper[5005]: I0225 13:59:15.878507 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m92xd" event={"ID":"afd98343-4162-4b59-a3ab-ef517df99459","Type":"ContainerDied","Data":"ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c"} Feb 25 13:59:15 crc kubenswrapper[5005]: I0225 13:59:15.878745 5005 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-m92xd" event={"ID":"afd98343-4162-4b59-a3ab-ef517df99459","Type":"ContainerStarted","Data":"ff8905a5e88d109f179b176a50f43b567e2da71e9c2eabbb75508159630830b4"} Feb 25 13:59:15 crc kubenswrapper[5005]: I0225 13:59:15.882578 5005 generic.go:334] "Generic (PLEG): container finished" podID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerID="64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba" exitCode=0 Feb 25 13:59:15 crc kubenswrapper[5005]: I0225 13:59:15.882607 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48vlq" event={"ID":"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f","Type":"ContainerDied","Data":"64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba"} Feb 25 13:59:16 crc kubenswrapper[5005]: I0225 13:59:16.893871 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48vlq" event={"ID":"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f","Type":"ContainerStarted","Data":"494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55"} Feb 25 13:59:17 crc kubenswrapper[5005]: I0225 13:59:17.905452 5005 generic.go:334] "Generic (PLEG): container finished" podID="afd98343-4162-4b59-a3ab-ef517df99459" containerID="4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278" exitCode=0 Feb 25 13:59:17 crc kubenswrapper[5005]: I0225 13:59:17.905583 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m92xd" event={"ID":"afd98343-4162-4b59-a3ab-ef517df99459","Type":"ContainerDied","Data":"4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278"} Feb 25 13:59:17 crc kubenswrapper[5005]: I0225 13:59:17.932383 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-48vlq" podStartSLOduration=3.267621255 podStartE2EDuration="4.932343792s" podCreationTimestamp="2026-02-25 13:59:13 +0000 
UTC" firstStartedPulling="2026-02-25 13:59:14.872180938 +0000 UTC m=+9668.912913305" lastFinishedPulling="2026-02-25 13:59:16.536903505 +0000 UTC m=+9670.577635842" observedRunningTime="2026-02-25 13:59:16.922737137 +0000 UTC m=+9670.963469464" watchObservedRunningTime="2026-02-25 13:59:17.932343792 +0000 UTC m=+9671.973076119" Feb 25 13:59:18 crc kubenswrapper[5005]: I0225 13:59:18.924949 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m92xd" event={"ID":"afd98343-4162-4b59-a3ab-ef517df99459","Type":"ContainerStarted","Data":"017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7"} Feb 25 13:59:23 crc kubenswrapper[5005]: I0225 13:59:23.395078 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:23 crc kubenswrapper[5005]: I0225 13:59:23.397614 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:23 crc kubenswrapper[5005]: I0225 13:59:23.454740 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:23 crc kubenswrapper[5005]: I0225 13:59:23.496414 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m92xd" podStartSLOduration=7.107668907 podStartE2EDuration="9.496383274s" podCreationTimestamp="2026-02-25 13:59:14 +0000 UTC" firstStartedPulling="2026-02-25 13:59:15.882571829 +0000 UTC m=+9669.923304166" lastFinishedPulling="2026-02-25 13:59:18.271286196 +0000 UTC m=+9672.312018533" observedRunningTime="2026-02-25 13:59:18.947005067 +0000 UTC m=+9672.987737404" watchObservedRunningTime="2026-02-25 13:59:23.496383274 +0000 UTC m=+9677.537115601" Feb 25 13:59:24 crc kubenswrapper[5005]: I0225 13:59:24.035403 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:24 crc kubenswrapper[5005]: I0225 13:59:24.448804 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48vlq"] Feb 25 13:59:24 crc kubenswrapper[5005]: I0225 13:59:24.779232 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:24 crc kubenswrapper[5005]: I0225 13:59:24.779286 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:24 crc kubenswrapper[5005]: I0225 13:59:24.847681 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:25 crc kubenswrapper[5005]: I0225 13:59:25.058128 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:25 crc kubenswrapper[5005]: I0225 13:59:25.992541 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-48vlq" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="registry-server" containerID="cri-o://494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55" gracePeriod=2 Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.453939 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.522877 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-utilities\") pod \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.522952 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr488\" (UniqueName: \"kubernetes.io/projected/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-kube-api-access-fr488\") pod \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.523015 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-catalog-content\") pod \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\" (UID: \"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f\") " Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.523947 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-utilities" (OuterVolumeSpecName: "utilities") pod "0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" (UID: "0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.532015 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-kube-api-access-fr488" (OuterVolumeSpecName: "kube-api-access-fr488") pod "0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" (UID: "0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f"). InnerVolumeSpecName "kube-api-access-fr488". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.586788 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" (UID: "0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.626651 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.626723 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:59:26 crc kubenswrapper[5005]: I0225 13:59:26.626746 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr488\" (UniqueName: \"kubernetes.io/projected/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f-kube-api-access-fr488\") on node \"crc\" DevicePath \"\"" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.008825 5005 generic.go:334] "Generic (PLEG): container finished" podID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerID="494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55" exitCode=0 Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.008919 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48vlq" event={"ID":"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f","Type":"ContainerDied","Data":"494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55"} Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.008955 5005 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-48vlq" event={"ID":"0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f","Type":"ContainerDied","Data":"32535d7096484aac379fe5616d480dfcba551e024e3ea15b58e963ae52673b6d"} Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.009009 5005 scope.go:117] "RemoveContainer" containerID="494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.009043 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48vlq" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.041494 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48vlq"] Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.048825 5005 scope.go:117] "RemoveContainer" containerID="64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.053248 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-48vlq"] Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.064416 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m92xd"] Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.064880 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m92xd" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="registry-server" containerID="cri-o://017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7" gracePeriod=2 Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.084091 5005 scope.go:117] "RemoveContainer" containerID="a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.244104 5005 scope.go:117] "RemoveContainer" 
containerID="494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55" Feb 25 13:59:27 crc kubenswrapper[5005]: E0225 13:59:27.244687 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55\": container with ID starting with 494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55 not found: ID does not exist" containerID="494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.244718 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55"} err="failed to get container status \"494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55\": rpc error: code = NotFound desc = could not find container \"494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55\": container with ID starting with 494d75fdb506cb929de89f4cb4b7144af760ab7e9143be184500928105446c55 not found: ID does not exist" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.244744 5005 scope.go:117] "RemoveContainer" containerID="64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba" Feb 25 13:59:27 crc kubenswrapper[5005]: E0225 13:59:27.245258 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba\": container with ID starting with 64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba not found: ID does not exist" containerID="64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.245275 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba"} err="failed to get container status \"64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba\": rpc error: code = NotFound desc = could not find container \"64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba\": container with ID starting with 64e5f48a127dadf6ab7b701bb9a4a933e7bcb1c66003832686e9e423198f71ba not found: ID does not exist" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.245288 5005 scope.go:117] "RemoveContainer" containerID="a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f" Feb 25 13:59:27 crc kubenswrapper[5005]: E0225 13:59:27.245707 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f\": container with ID starting with a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f not found: ID does not exist" containerID="a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.245732 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f"} err="failed to get container status \"a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f\": rpc error: code = NotFound desc = could not find container \"a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f\": container with ID starting with a43543ec785cf0f41bf0d371983af64a58b30d13f20ee098aef123582932936f not found: ID does not exist" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.544898 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.650015 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-catalog-content\") pod \"afd98343-4162-4b59-a3ab-ef517df99459\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.650234 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-utilities\") pod \"afd98343-4162-4b59-a3ab-ef517df99459\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.650361 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld2kc\" (UniqueName: \"kubernetes.io/projected/afd98343-4162-4b59-a3ab-ef517df99459-kube-api-access-ld2kc\") pod \"afd98343-4162-4b59-a3ab-ef517df99459\" (UID: \"afd98343-4162-4b59-a3ab-ef517df99459\") " Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.651814 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-utilities" (OuterVolumeSpecName: "utilities") pod "afd98343-4162-4b59-a3ab-ef517df99459" (UID: "afd98343-4162-4b59-a3ab-ef517df99459"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.655927 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd98343-4162-4b59-a3ab-ef517df99459-kube-api-access-ld2kc" (OuterVolumeSpecName: "kube-api-access-ld2kc") pod "afd98343-4162-4b59-a3ab-ef517df99459" (UID: "afd98343-4162-4b59-a3ab-ef517df99459"). InnerVolumeSpecName "kube-api-access-ld2kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.680095 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afd98343-4162-4b59-a3ab-ef517df99459" (UID: "afd98343-4162-4b59-a3ab-ef517df99459"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.752819 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.752848 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld2kc\" (UniqueName: \"kubernetes.io/projected/afd98343-4162-4b59-a3ab-ef517df99459-kube-api-access-ld2kc\") on node \"crc\" DevicePath \"\"" Feb 25 13:59:27 crc kubenswrapper[5005]: I0225 13:59:27.752858 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd98343-4162-4b59-a3ab-ef517df99459-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.017651 5005 generic.go:334] "Generic (PLEG): container finished" podID="afd98343-4162-4b59-a3ab-ef517df99459" containerID="017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7" exitCode=0 Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.017697 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m92xd" event={"ID":"afd98343-4162-4b59-a3ab-ef517df99459","Type":"ContainerDied","Data":"017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7"} Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.017716 5005 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m92xd" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.017738 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m92xd" event={"ID":"afd98343-4162-4b59-a3ab-ef517df99459","Type":"ContainerDied","Data":"ff8905a5e88d109f179b176a50f43b567e2da71e9c2eabbb75508159630830b4"} Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.017761 5005 scope.go:117] "RemoveContainer" containerID="017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.035978 5005 scope.go:117] "RemoveContainer" containerID="4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.055484 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m92xd"] Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.057736 5005 scope.go:117] "RemoveContainer" containerID="ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.064256 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m92xd"] Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.073937 5005 scope.go:117] "RemoveContainer" containerID="017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7" Feb 25 13:59:28 crc kubenswrapper[5005]: E0225 13:59:28.074333 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7\": container with ID starting with 017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7 not found: ID does not exist" containerID="017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.074386 5005 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7"} err="failed to get container status \"017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7\": rpc error: code = NotFound desc = could not find container \"017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7\": container with ID starting with 017a11349daab0e159d61d871118ad1ee40c48013940030a695be8d255a017f7 not found: ID does not exist" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.074407 5005 scope.go:117] "RemoveContainer" containerID="4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278" Feb 25 13:59:28 crc kubenswrapper[5005]: E0225 13:59:28.074874 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278\": container with ID starting with 4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278 not found: ID does not exist" containerID="4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.074904 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278"} err="failed to get container status \"4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278\": rpc error: code = NotFound desc = could not find container \"4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278\": container with ID starting with 4649892dba460569061f6e83c4482f229d99c8499df541759e6e00e551ab5278 not found: ID does not exist" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.074926 5005 scope.go:117] "RemoveContainer" containerID="ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c" Feb 25 13:59:28 crc kubenswrapper[5005]: E0225 
13:59:28.075133 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c\": container with ID starting with ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c not found: ID does not exist" containerID="ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.075151 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c"} err="failed to get container status \"ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c\": rpc error: code = NotFound desc = could not find container \"ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c\": container with ID starting with ae6154cd08bb550c91cd483fcfd3c9098d418908ea9dfa38394c4584ceb0559c not found: ID does not exist" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.700101 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" path="/var/lib/kubelet/pods/0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f/volumes" Feb 25 13:59:28 crc kubenswrapper[5005]: I0225 13:59:28.702150 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd98343-4162-4b59-a3ab-ef517df99459" path="/var/lib/kubelet/pods/afd98343-4162-4b59-a3ab-ef517df99459/volumes" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.159364 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2"] Feb 25 14:00:00 crc kubenswrapper[5005]: E0225 14:00:00.160508 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="registry-server" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160523 5005 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="registry-server" Feb 25 14:00:00 crc kubenswrapper[5005]: E0225 14:00:00.160542 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="extract-utilities" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160548 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="extract-utilities" Feb 25 14:00:00 crc kubenswrapper[5005]: E0225 14:00:00.160562 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="registry-server" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160569 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="registry-server" Feb 25 14:00:00 crc kubenswrapper[5005]: E0225 14:00:00.160587 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="extract-content" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160592 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="extract-content" Feb 25 14:00:00 crc kubenswrapper[5005]: E0225 14:00:00.160601 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="extract-utilities" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160606 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="extract-utilities" Feb 25 14:00:00 crc kubenswrapper[5005]: E0225 14:00:00.160621 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="extract-content" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160627 5005 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="extract-content" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160792 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf08d0d-e6c0-45b7-a7c1-753c10e2ae6f" containerName="registry-server" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.160806 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd98343-4162-4b59-a3ab-ef517df99459" containerName="registry-server" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.161481 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.164840 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.171706 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533800-r758t"] Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.173044 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.173846 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.175276 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.175275 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.175505 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.187331 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2"] Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.198701 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533800-r758t"] Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.333256 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg9l2\" (UniqueName: \"kubernetes.io/projected/aa09c865-8bed-4c3b-86fb-07da923d58cf-kube-api-access-mg9l2\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.333657 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa09c865-8bed-4c3b-86fb-07da923d58cf-secret-volume\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.333824 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rz9\" (UniqueName: \"kubernetes.io/projected/adefa602-56b0-4df1-bbb5-a49a335696e5-kube-api-access-68rz9\") pod \"auto-csr-approver-29533800-r758t\" (UID: \"adefa602-56b0-4df1-bbb5-a49a335696e5\") " pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.333915 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa09c865-8bed-4c3b-86fb-07da923d58cf-config-volume\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.436020 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa09c865-8bed-4c3b-86fb-07da923d58cf-secret-volume\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.436123 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rz9\" (UniqueName: \"kubernetes.io/projected/adefa602-56b0-4df1-bbb5-a49a335696e5-kube-api-access-68rz9\") pod \"auto-csr-approver-29533800-r758t\" (UID: \"adefa602-56b0-4df1-bbb5-a49a335696e5\") " pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.436179 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/aa09c865-8bed-4c3b-86fb-07da923d58cf-config-volume\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.436251 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg9l2\" (UniqueName: \"kubernetes.io/projected/aa09c865-8bed-4c3b-86fb-07da923d58cf-kube-api-access-mg9l2\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.437147 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa09c865-8bed-4c3b-86fb-07da923d58cf-config-volume\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.441829 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa09c865-8bed-4c3b-86fb-07da923d58cf-secret-volume\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.452780 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg9l2\" (UniqueName: \"kubernetes.io/projected/aa09c865-8bed-4c3b-86fb-07da923d58cf-kube-api-access-mg9l2\") pod \"collect-profiles-29533800-94hn2\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.456731 5005 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rz9\" (UniqueName: \"kubernetes.io/projected/adefa602-56b0-4df1-bbb5-a49a335696e5-kube-api-access-68rz9\") pod \"auto-csr-approver-29533800-r758t\" (UID: \"adefa602-56b0-4df1-bbb5-a49a335696e5\") " pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.491085 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:00 crc kubenswrapper[5005]: I0225 14:00:00.527990 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:01 crc kubenswrapper[5005]: I0225 14:00:01.036523 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533800-r758t"] Feb 25 14:00:01 crc kubenswrapper[5005]: W0225 14:00:01.101353 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa09c865_8bed_4c3b_86fb_07da923d58cf.slice/crio-6e23417421867e3170077eeb513679249e70d16f8af0b88185ea91b965ee845e WatchSource:0}: Error finding container 6e23417421867e3170077eeb513679249e70d16f8af0b88185ea91b965ee845e: Status 404 returned error can't find the container with id 6e23417421867e3170077eeb513679249e70d16f8af0b88185ea91b965ee845e Feb 25 14:00:01 crc kubenswrapper[5005]: I0225 14:00:01.102073 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2"] Feb 25 14:00:01 crc kubenswrapper[5005]: I0225 14:00:01.343804 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" 
event={"ID":"aa09c865-8bed-4c3b-86fb-07da923d58cf","Type":"ContainerStarted","Data":"781226117ef73c957664aeea83cc3a0213023c9459aae00230b1ba617044a945"} Feb 25 14:00:01 crc kubenswrapper[5005]: I0225 14:00:01.344184 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" event={"ID":"aa09c865-8bed-4c3b-86fb-07da923d58cf","Type":"ContainerStarted","Data":"6e23417421867e3170077eeb513679249e70d16f8af0b88185ea91b965ee845e"} Feb 25 14:00:01 crc kubenswrapper[5005]: I0225 14:00:01.346213 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533800-r758t" event={"ID":"adefa602-56b0-4df1-bbb5-a49a335696e5","Type":"ContainerStarted","Data":"e93da58276001267b3202ec05dd7d90820786ee2cec8b627dc6ad63c190cead1"} Feb 25 14:00:02 crc kubenswrapper[5005]: I0225 14:00:02.357212 5005 generic.go:334] "Generic (PLEG): container finished" podID="aa09c865-8bed-4c3b-86fb-07da923d58cf" containerID="781226117ef73c957664aeea83cc3a0213023c9459aae00230b1ba617044a945" exitCode=0 Feb 25 14:00:02 crc kubenswrapper[5005]: I0225 14:00:02.357319 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" event={"ID":"aa09c865-8bed-4c3b-86fb-07da923d58cf","Type":"ContainerDied","Data":"781226117ef73c957664aeea83cc3a0213023c9459aae00230b1ba617044a945"} Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.700785 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.891943 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa09c865-8bed-4c3b-86fb-07da923d58cf-secret-volume\") pod \"aa09c865-8bed-4c3b-86fb-07da923d58cf\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.892015 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa09c865-8bed-4c3b-86fb-07da923d58cf-config-volume\") pod \"aa09c865-8bed-4c3b-86fb-07da923d58cf\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.892133 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg9l2\" (UniqueName: \"kubernetes.io/projected/aa09c865-8bed-4c3b-86fb-07da923d58cf-kube-api-access-mg9l2\") pod \"aa09c865-8bed-4c3b-86fb-07da923d58cf\" (UID: \"aa09c865-8bed-4c3b-86fb-07da923d58cf\") " Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.893059 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa09c865-8bed-4c3b-86fb-07da923d58cf-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa09c865-8bed-4c3b-86fb-07da923d58cf" (UID: "aa09c865-8bed-4c3b-86fb-07da923d58cf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.910626 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa09c865-8bed-4c3b-86fb-07da923d58cf-kube-api-access-mg9l2" (OuterVolumeSpecName: "kube-api-access-mg9l2") pod "aa09c865-8bed-4c3b-86fb-07da923d58cf" (UID: "aa09c865-8bed-4c3b-86fb-07da923d58cf"). 
InnerVolumeSpecName "kube-api-access-mg9l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.910701 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa09c865-8bed-4c3b-86fb-07da923d58cf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa09c865-8bed-4c3b-86fb-07da923d58cf" (UID: "aa09c865-8bed-4c3b-86fb-07da923d58cf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.994617 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa09c865-8bed-4c3b-86fb-07da923d58cf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.994960 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa09c865-8bed-4c3b-86fb-07da923d58cf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 14:00:03 crc kubenswrapper[5005]: I0225 14:00:03.994974 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg9l2\" (UniqueName: \"kubernetes.io/projected/aa09c865-8bed-4c3b-86fb-07da923d58cf-kube-api-access-mg9l2\") on node \"crc\" DevicePath \"\"" Feb 25 14:00:04 crc kubenswrapper[5005]: I0225 14:00:04.373860 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" event={"ID":"aa09c865-8bed-4c3b-86fb-07da923d58cf","Type":"ContainerDied","Data":"6e23417421867e3170077eeb513679249e70d16f8af0b88185ea91b965ee845e"} Feb 25 14:00:04 crc kubenswrapper[5005]: I0225 14:00:04.374284 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e23417421867e3170077eeb513679249e70d16f8af0b88185ea91b965ee845e" Feb 25 14:00:04 crc kubenswrapper[5005]: I0225 14:00:04.373895 5005 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533800-94hn2" Feb 25 14:00:04 crc kubenswrapper[5005]: I0225 14:00:04.427104 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk"] Feb 25 14:00:04 crc kubenswrapper[5005]: I0225 14:00:04.434994 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533755-nc8nk"] Feb 25 14:00:04 crc kubenswrapper[5005]: I0225 14:00:04.696913 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435895b4-3a4a-49ab-bf1c-bd8a4b1df37d" path="/var/lib/kubelet/pods/435895b4-3a4a-49ab-bf1c-bd8a4b1df37d/volumes" Feb 25 14:00:09 crc kubenswrapper[5005]: I0225 14:00:09.418191 5005 generic.go:334] "Generic (PLEG): container finished" podID="adefa602-56b0-4df1-bbb5-a49a335696e5" containerID="94b55ee1a0d326c639523366bfd28c639382af3ad1a6a6d631833cb85c976cec" exitCode=0 Feb 25 14:00:09 crc kubenswrapper[5005]: I0225 14:00:09.418284 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533800-r758t" event={"ID":"adefa602-56b0-4df1-bbb5-a49a335696e5","Type":"ContainerDied","Data":"94b55ee1a0d326c639523366bfd28c639382af3ad1a6a6d631833cb85c976cec"} Feb 25 14:00:10 crc kubenswrapper[5005]: I0225 14:00:10.811031 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:10 crc kubenswrapper[5005]: I0225 14:00:10.927033 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rz9\" (UniqueName: \"kubernetes.io/projected/adefa602-56b0-4df1-bbb5-a49a335696e5-kube-api-access-68rz9\") pod \"adefa602-56b0-4df1-bbb5-a49a335696e5\" (UID: \"adefa602-56b0-4df1-bbb5-a49a335696e5\") " Feb 25 14:00:10 crc kubenswrapper[5005]: I0225 14:00:10.933172 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adefa602-56b0-4df1-bbb5-a49a335696e5-kube-api-access-68rz9" (OuterVolumeSpecName: "kube-api-access-68rz9") pod "adefa602-56b0-4df1-bbb5-a49a335696e5" (UID: "adefa602-56b0-4df1-bbb5-a49a335696e5"). InnerVolumeSpecName "kube-api-access-68rz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:00:11 crc kubenswrapper[5005]: I0225 14:00:11.028908 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rz9\" (UniqueName: \"kubernetes.io/projected/adefa602-56b0-4df1-bbb5-a49a335696e5-kube-api-access-68rz9\") on node \"crc\" DevicePath \"\"" Feb 25 14:00:11 crc kubenswrapper[5005]: I0225 14:00:11.475772 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533800-r758t" event={"ID":"adefa602-56b0-4df1-bbb5-a49a335696e5","Type":"ContainerDied","Data":"e93da58276001267b3202ec05dd7d90820786ee2cec8b627dc6ad63c190cead1"} Feb 25 14:00:11 crc kubenswrapper[5005]: I0225 14:00:11.475825 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93da58276001267b3202ec05dd7d90820786ee2cec8b627dc6ad63c190cead1" Feb 25 14:00:11 crc kubenswrapper[5005]: I0225 14:00:11.475824 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533800-r758t" Feb 25 14:00:11 crc kubenswrapper[5005]: I0225 14:00:11.864491 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533794-sdw7r"] Feb 25 14:00:11 crc kubenswrapper[5005]: I0225 14:00:11.872268 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533794-sdw7r"] Feb 25 14:00:12 crc kubenswrapper[5005]: I0225 14:00:12.695671 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb0b06b-bc4a-4f7e-a256-63f0995f1f19" path="/var/lib/kubelet/pods/0bb0b06b-bc4a-4f7e-a256-63f0995f1f19/volumes" Feb 25 14:00:27 crc kubenswrapper[5005]: I0225 14:00:27.648635 5005 scope.go:117] "RemoveContainer" containerID="fc90c498b00aab0e9f0169f8ccbc751c5d2f8859b40256b28d9552393890826f" Feb 25 14:00:27 crc kubenswrapper[5005]: I0225 14:00:27.693579 5005 scope.go:117] "RemoveContainer" containerID="230e0c86c5bc0a3b383e58d316e99cafd5282f49bef16678072ef9d094421e1a" Feb 25 14:00:28 crc kubenswrapper[5005]: I0225 14:00:28.087836 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:00:28 crc kubenswrapper[5005]: I0225 14:00:28.087902 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:00:58 crc kubenswrapper[5005]: I0225 14:00:58.087864 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:00:58 crc kubenswrapper[5005]: I0225 14:00:58.088547 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.146992 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29533801-h2bmh"] Feb 25 14:01:00 crc kubenswrapper[5005]: E0225 14:01:00.147703 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adefa602-56b0-4df1-bbb5-a49a335696e5" containerName="oc" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.147717 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="adefa602-56b0-4df1-bbb5-a49a335696e5" containerName="oc" Feb 25 14:01:00 crc kubenswrapper[5005]: E0225 14:01:00.147746 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa09c865-8bed-4c3b-86fb-07da923d58cf" containerName="collect-profiles" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.147753 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa09c865-8bed-4c3b-86fb-07da923d58cf" containerName="collect-profiles" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.148171 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa09c865-8bed-4c3b-86fb-07da923d58cf" containerName="collect-profiles" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.148190 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="adefa602-56b0-4df1-bbb5-a49a335696e5" containerName="oc" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.148827 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.173950 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29533801-h2bmh"] Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.230095 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-combined-ca-bundle\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.230222 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-fernet-keys\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.230281 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tp69\" (UniqueName: \"kubernetes.io/projected/6f5b545f-a79f-408f-8dae-4c688e9a70eb-kube-api-access-5tp69\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.230361 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-config-data\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.331631 5005 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-config-data\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.331699 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-combined-ca-bundle\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.331785 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-fernet-keys\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.331839 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tp69\" (UniqueName: \"kubernetes.io/projected/6f5b545f-a79f-408f-8dae-4c688e9a70eb-kube-api-access-5tp69\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.338524 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-combined-ca-bundle\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.338666 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-config-data\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.345412 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-fernet-keys\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.356724 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tp69\" (UniqueName: \"kubernetes.io/projected/6f5b545f-a79f-408f-8dae-4c688e9a70eb-kube-api-access-5tp69\") pod \"keystone-cron-29533801-h2bmh\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.474244 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:00 crc kubenswrapper[5005]: I0225 14:01:00.967404 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29533801-h2bmh"] Feb 25 14:01:01 crc kubenswrapper[5005]: I0225 14:01:01.905021 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533801-h2bmh" event={"ID":"6f5b545f-a79f-408f-8dae-4c688e9a70eb","Type":"ContainerStarted","Data":"1410929941c06da86642f0cc5f4ef83817ca3f86f706f8a20a8999c17d01c8ff"} Feb 25 14:01:01 crc kubenswrapper[5005]: I0225 14:01:01.905658 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533801-h2bmh" event={"ID":"6f5b545f-a79f-408f-8dae-4c688e9a70eb","Type":"ContainerStarted","Data":"e8d1458572b4a2244df4cbeebf7b059f41f797460ae8211316c37b21d7f33941"} Feb 25 14:01:01 crc kubenswrapper[5005]: I0225 14:01:01.933986 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29533801-h2bmh" podStartSLOduration=1.933943856 podStartE2EDuration="1.933943856s" podCreationTimestamp="2026-02-25 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-25 14:01:01.923211717 +0000 UTC m=+9775.963944044" watchObservedRunningTime="2026-02-25 14:01:01.933943856 +0000 UTC m=+9775.974676183" Feb 25 14:01:03 crc kubenswrapper[5005]: I0225 14:01:03.921323 5005 generic.go:334] "Generic (PLEG): container finished" podID="6f5b545f-a79f-408f-8dae-4c688e9a70eb" containerID="1410929941c06da86642f0cc5f4ef83817ca3f86f706f8a20a8999c17d01c8ff" exitCode=0 Feb 25 14:01:03 crc kubenswrapper[5005]: I0225 14:01:03.921487 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533801-h2bmh" 
event={"ID":"6f5b545f-a79f-408f-8dae-4c688e9a70eb","Type":"ContainerDied","Data":"1410929941c06da86642f0cc5f4ef83817ca3f86f706f8a20a8999c17d01c8ff"} Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.267733 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.344839 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-config-data\") pod \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.344908 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-fernet-keys\") pod \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.345005 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-combined-ca-bundle\") pod \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.345122 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tp69\" (UniqueName: \"kubernetes.io/projected/6f5b545f-a79f-408f-8dae-4c688e9a70eb-kube-api-access-5tp69\") pod \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\" (UID: \"6f5b545f-a79f-408f-8dae-4c688e9a70eb\") " Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.350761 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5b545f-a79f-408f-8dae-4c688e9a70eb-kube-api-access-5tp69" 
(OuterVolumeSpecName: "kube-api-access-5tp69") pod "6f5b545f-a79f-408f-8dae-4c688e9a70eb" (UID: "6f5b545f-a79f-408f-8dae-4c688e9a70eb"). InnerVolumeSpecName "kube-api-access-5tp69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.351244 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6f5b545f-a79f-408f-8dae-4c688e9a70eb" (UID: "6f5b545f-a79f-408f-8dae-4c688e9a70eb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.384483 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5b545f-a79f-408f-8dae-4c688e9a70eb" (UID: "6f5b545f-a79f-408f-8dae-4c688e9a70eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.406495 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-config-data" (OuterVolumeSpecName: "config-data") pod "6f5b545f-a79f-408f-8dae-4c688e9a70eb" (UID: "6f5b545f-a79f-408f-8dae-4c688e9a70eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.446560 5005 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.446891 5005 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.446902 5005 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5b545f-a79f-408f-8dae-4c688e9a70eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.446915 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tp69\" (UniqueName: \"kubernetes.io/projected/6f5b545f-a79f-408f-8dae-4c688e9a70eb-kube-api-access-5tp69\") on node \"crc\" DevicePath \"\"" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.943469 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29533801-h2bmh" event={"ID":"6f5b545f-a79f-408f-8dae-4c688e9a70eb","Type":"ContainerDied","Data":"e8d1458572b4a2244df4cbeebf7b059f41f797460ae8211316c37b21d7f33941"} Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.943511 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d1458572b4a2244df4cbeebf7b059f41f797460ae8211316c37b21d7f33941" Feb 25 14:01:05 crc kubenswrapper[5005]: I0225 14:01:05.943549 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29533801-h2bmh" Feb 25 14:01:28 crc kubenswrapper[5005]: I0225 14:01:28.086941 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:01:28 crc kubenswrapper[5005]: I0225 14:01:28.087550 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:01:28 crc kubenswrapper[5005]: I0225 14:01:28.087603 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 14:01:28 crc kubenswrapper[5005]: I0225 14:01:28.088442 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 14:01:28 crc kubenswrapper[5005]: I0225 14:01:28.088508 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" gracePeriod=600 Feb 25 14:01:28 crc kubenswrapper[5005]: E0225 14:01:28.206165 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:01:29 crc kubenswrapper[5005]: I0225 14:01:29.140641 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" exitCode=0 Feb 25 14:01:29 crc kubenswrapper[5005]: I0225 14:01:29.140708 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"} Feb 25 14:01:29 crc kubenswrapper[5005]: I0225 14:01:29.141022 5005 scope.go:117] "RemoveContainer" containerID="0dce4e7103bd6e14723c9dec0cd250411a0416d07006d4293ec16e96d56e8d6c" Feb 25 14:01:29 crc kubenswrapper[5005]: I0225 14:01:29.141759 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:01:29 crc kubenswrapper[5005]: E0225 14:01:29.141997 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:01:44 crc kubenswrapper[5005]: I0225 14:01:44.685998 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:01:44 crc 
kubenswrapper[5005]: E0225 14:01:44.686764 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:01:56 crc kubenswrapper[5005]: I0225 14:01:56.692093 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:01:56 crc kubenswrapper[5005]: E0225 14:01:56.693517 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.136445 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533802-rc5z9"] Feb 25 14:02:00 crc kubenswrapper[5005]: E0225 14:02:00.138410 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5b545f-a79f-408f-8dae-4c688e9a70eb" containerName="keystone-cron" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.138490 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5b545f-a79f-408f-8dae-4c688e9a70eb" containerName="keystone-cron" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.138736 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5b545f-a79f-408f-8dae-4c688e9a70eb" containerName="keystone-cron" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.139533 5005 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.141749 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.142151 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.142299 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.147550 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533802-rc5z9"] Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.191812 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kgd\" (UniqueName: \"kubernetes.io/projected/8ea66c9b-0080-4cb8-bc92-95471e197568-kube-api-access-j6kgd\") pod \"auto-csr-approver-29533802-rc5z9\" (UID: \"8ea66c9b-0080-4cb8-bc92-95471e197568\") " pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.293388 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kgd\" (UniqueName: \"kubernetes.io/projected/8ea66c9b-0080-4cb8-bc92-95471e197568-kube-api-access-j6kgd\") pod \"auto-csr-approver-29533802-rc5z9\" (UID: \"8ea66c9b-0080-4cb8-bc92-95471e197568\") " pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.309918 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kgd\" (UniqueName: \"kubernetes.io/projected/8ea66c9b-0080-4cb8-bc92-95471e197568-kube-api-access-j6kgd\") pod \"auto-csr-approver-29533802-rc5z9\" (UID: 
\"8ea66c9b-0080-4cb8-bc92-95471e197568\") " pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.461510 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:00 crc kubenswrapper[5005]: I0225 14:02:00.920825 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533802-rc5z9"] Feb 25 14:02:01 crc kubenswrapper[5005]: I0225 14:02:01.442585 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" event={"ID":"8ea66c9b-0080-4cb8-bc92-95471e197568","Type":"ContainerStarted","Data":"ece2f2c24271adbc7da4bff67b08b53ddff03d9b144604238d9512d083ea04a0"} Feb 25 14:02:02 crc kubenswrapper[5005]: I0225 14:02:02.450956 5005 generic.go:334] "Generic (PLEG): container finished" podID="8ea66c9b-0080-4cb8-bc92-95471e197568" containerID="c5a39f3b55e703b419e2ec84bc8dbcd650643048bce96c8b68b82a9e25262ec0" exitCode=0 Feb 25 14:02:02 crc kubenswrapper[5005]: I0225 14:02:02.451001 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" event={"ID":"8ea66c9b-0080-4cb8-bc92-95471e197568","Type":"ContainerDied","Data":"c5a39f3b55e703b419e2ec84bc8dbcd650643048bce96c8b68b82a9e25262ec0"} Feb 25 14:02:03 crc kubenswrapper[5005]: I0225 14:02:03.790837 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:03 crc kubenswrapper[5005]: I0225 14:02:03.861507 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kgd\" (UniqueName: \"kubernetes.io/projected/8ea66c9b-0080-4cb8-bc92-95471e197568-kube-api-access-j6kgd\") pod \"8ea66c9b-0080-4cb8-bc92-95471e197568\" (UID: \"8ea66c9b-0080-4cb8-bc92-95471e197568\") " Feb 25 14:02:03 crc kubenswrapper[5005]: I0225 14:02:03.876921 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea66c9b-0080-4cb8-bc92-95471e197568-kube-api-access-j6kgd" (OuterVolumeSpecName: "kube-api-access-j6kgd") pod "8ea66c9b-0080-4cb8-bc92-95471e197568" (UID: "8ea66c9b-0080-4cb8-bc92-95471e197568"). InnerVolumeSpecName "kube-api-access-j6kgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:02:03 crc kubenswrapper[5005]: I0225 14:02:03.963744 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6kgd\" (UniqueName: \"kubernetes.io/projected/8ea66c9b-0080-4cb8-bc92-95471e197568-kube-api-access-j6kgd\") on node \"crc\" DevicePath \"\"" Feb 25 14:02:04 crc kubenswrapper[5005]: I0225 14:02:04.472838 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" event={"ID":"8ea66c9b-0080-4cb8-bc92-95471e197568","Type":"ContainerDied","Data":"ece2f2c24271adbc7da4bff67b08b53ddff03d9b144604238d9512d083ea04a0"} Feb 25 14:02:04 crc kubenswrapper[5005]: I0225 14:02:04.472908 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece2f2c24271adbc7da4bff67b08b53ddff03d9b144604238d9512d083ea04a0" Feb 25 14:02:04 crc kubenswrapper[5005]: I0225 14:02:04.472877 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533802-rc5z9" Feb 25 14:02:04 crc kubenswrapper[5005]: I0225 14:02:04.851526 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533796-tfmk8"] Feb 25 14:02:04 crc kubenswrapper[5005]: I0225 14:02:04.860734 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533796-tfmk8"] Feb 25 14:02:06 crc kubenswrapper[5005]: I0225 14:02:06.702432 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6f95c5-4d98-4a5e-afde-ab241829e009" path="/var/lib/kubelet/pods/8e6f95c5-4d98-4a5e-afde-ab241829e009/volumes" Feb 25 14:02:07 crc kubenswrapper[5005]: I0225 14:02:07.685950 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:02:07 crc kubenswrapper[5005]: E0225 14:02:07.686616 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:02:19 crc kubenswrapper[5005]: I0225 14:02:19.686301 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:02:19 crc kubenswrapper[5005]: E0225 14:02:19.687096 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:02:27 crc kubenswrapper[5005]: I0225 14:02:27.807976 5005 scope.go:117] "RemoveContainer" containerID="9fb40e56f2944e165a140dbf64ed851f9d0717fc8d4cd4eca656c440b3a253a0" Feb 25 14:02:34 crc kubenswrapper[5005]: I0225 14:02:34.687321 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:02:34 crc kubenswrapper[5005]: E0225 14:02:34.688907 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:02:48 crc kubenswrapper[5005]: I0225 14:02:48.686174 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:02:48 crc kubenswrapper[5005]: E0225 14:02:48.686967 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:03:03 crc kubenswrapper[5005]: I0225 14:03:03.685540 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:03:03 crc kubenswrapper[5005]: E0225 14:03:03.686289 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:03:15 crc kubenswrapper[5005]: I0225 14:03:15.686304 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:03:15 crc kubenswrapper[5005]: E0225 14:03:15.687225 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:03:26 crc kubenswrapper[5005]: I0225 14:03:26.694445 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:03:26 crc kubenswrapper[5005]: E0225 14:03:26.695356 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:03:40 crc kubenswrapper[5005]: I0225 14:03:40.688137 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:03:40 crc kubenswrapper[5005]: E0225 14:03:40.689583 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:03:52 crc kubenswrapper[5005]: I0225 14:03:52.686725 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:03:52 crc kubenswrapper[5005]: E0225 14:03:52.687751 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.146129 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533804-n76ck"] Feb 25 14:04:00 crc kubenswrapper[5005]: E0225 14:04:00.147025 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea66c9b-0080-4cb8-bc92-95471e197568" containerName="oc" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.147042 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea66c9b-0080-4cb8-bc92-95471e197568" containerName="oc" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.147245 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea66c9b-0080-4cb8-bc92-95471e197568" containerName="oc" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.148073 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.152849 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.152956 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.153078 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.160314 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533804-n76ck"] Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.321430 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdl5\" (UniqueName: \"kubernetes.io/projected/8ed1902d-ada0-4e8e-8a7a-0aa928f09599-kube-api-access-jpdl5\") pod \"auto-csr-approver-29533804-n76ck\" (UID: \"8ed1902d-ada0-4e8e-8a7a-0aa928f09599\") " pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.424222 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpdl5\" (UniqueName: \"kubernetes.io/projected/8ed1902d-ada0-4e8e-8a7a-0aa928f09599-kube-api-access-jpdl5\") pod \"auto-csr-approver-29533804-n76ck\" (UID: \"8ed1902d-ada0-4e8e-8a7a-0aa928f09599\") " pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.456481 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpdl5\" (UniqueName: \"kubernetes.io/projected/8ed1902d-ada0-4e8e-8a7a-0aa928f09599-kube-api-access-jpdl5\") pod \"auto-csr-approver-29533804-n76ck\" (UID: \"8ed1902d-ada0-4e8e-8a7a-0aa928f09599\") " 
pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.466495 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:00 crc kubenswrapper[5005]: I0225 14:04:00.926181 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533804-n76ck"] Feb 25 14:04:00 crc kubenswrapper[5005]: W0225 14:04:00.927672 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ed1902d_ada0_4e8e_8a7a_0aa928f09599.slice/crio-6a51a1949460ad28f67db1bb67a84cd823dda5a00b693b5f8af91eb0948ad951 WatchSource:0}: Error finding container 6a51a1949460ad28f67db1bb67a84cd823dda5a00b693b5f8af91eb0948ad951: Status 404 returned error can't find the container with id 6a51a1949460ad28f67db1bb67a84cd823dda5a00b693b5f8af91eb0948ad951 Feb 25 14:04:01 crc kubenswrapper[5005]: I0225 14:04:01.543613 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533804-n76ck" event={"ID":"8ed1902d-ada0-4e8e-8a7a-0aa928f09599","Type":"ContainerStarted","Data":"6a51a1949460ad28f67db1bb67a84cd823dda5a00b693b5f8af91eb0948ad951"} Feb 25 14:04:02 crc kubenswrapper[5005]: I0225 14:04:02.563127 5005 generic.go:334] "Generic (PLEG): container finished" podID="8ed1902d-ada0-4e8e-8a7a-0aa928f09599" containerID="b3997995268c54d20b208b9300d81f42de545399896935ad7c0c997229056fe4" exitCode=0 Feb 25 14:04:02 crc kubenswrapper[5005]: I0225 14:04:02.563181 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533804-n76ck" event={"ID":"8ed1902d-ada0-4e8e-8a7a-0aa928f09599","Type":"ContainerDied","Data":"b3997995268c54d20b208b9300d81f42de545399896935ad7c0c997229056fe4"} Feb 25 14:04:03 crc kubenswrapper[5005]: I0225 14:04:03.896844 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:03 crc kubenswrapper[5005]: I0225 14:04:03.988237 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpdl5\" (UniqueName: \"kubernetes.io/projected/8ed1902d-ada0-4e8e-8a7a-0aa928f09599-kube-api-access-jpdl5\") pod \"8ed1902d-ada0-4e8e-8a7a-0aa928f09599\" (UID: \"8ed1902d-ada0-4e8e-8a7a-0aa928f09599\") " Feb 25 14:04:03 crc kubenswrapper[5005]: I0225 14:04:03.993566 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed1902d-ada0-4e8e-8a7a-0aa928f09599-kube-api-access-jpdl5" (OuterVolumeSpecName: "kube-api-access-jpdl5") pod "8ed1902d-ada0-4e8e-8a7a-0aa928f09599" (UID: "8ed1902d-ada0-4e8e-8a7a-0aa928f09599"). InnerVolumeSpecName "kube-api-access-jpdl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:04:04 crc kubenswrapper[5005]: I0225 14:04:04.090555 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpdl5\" (UniqueName: \"kubernetes.io/projected/8ed1902d-ada0-4e8e-8a7a-0aa928f09599-kube-api-access-jpdl5\") on node \"crc\" DevicePath \"\"" Feb 25 14:04:04 crc kubenswrapper[5005]: I0225 14:04:04.582269 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533804-n76ck" event={"ID":"8ed1902d-ada0-4e8e-8a7a-0aa928f09599","Type":"ContainerDied","Data":"6a51a1949460ad28f67db1bb67a84cd823dda5a00b693b5f8af91eb0948ad951"} Feb 25 14:04:04 crc kubenswrapper[5005]: I0225 14:04:04.582306 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a51a1949460ad28f67db1bb67a84cd823dda5a00b693b5f8af91eb0948ad951" Feb 25 14:04:04 crc kubenswrapper[5005]: I0225 14:04:04.582330 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533804-n76ck" Feb 25 14:04:05 crc kubenswrapper[5005]: I0225 14:04:05.000470 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533798-9pfx5"] Feb 25 14:04:05 crc kubenswrapper[5005]: I0225 14:04:05.012043 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533798-9pfx5"] Feb 25 14:04:06 crc kubenswrapper[5005]: I0225 14:04:06.692088 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:04:06 crc kubenswrapper[5005]: E0225 14:04:06.692580 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:04:06 crc kubenswrapper[5005]: I0225 14:04:06.697066 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c385c6-ce55-4264-a6dc-2453e7da9e2a" path="/var/lib/kubelet/pods/a4c385c6-ce55-4264-a6dc-2453e7da9e2a/volumes" Feb 25 14:04:17 crc kubenswrapper[5005]: I0225 14:04:17.686465 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:04:17 crc kubenswrapper[5005]: E0225 14:04:17.687608 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:04:27 crc kubenswrapper[5005]: I0225 14:04:27.917778 5005 scope.go:117] "RemoveContainer" containerID="707335ced59666285654f7cdf8c81563a97c9da35636e8d377aa14d4942a3b41" Feb 25 14:04:31 crc kubenswrapper[5005]: I0225 14:04:31.686529 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:04:31 crc kubenswrapper[5005]: E0225 14:04:31.687326 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:04:43 crc kubenswrapper[5005]: I0225 14:04:43.685624 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:04:43 crc kubenswrapper[5005]: E0225 14:04:43.686553 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:04:56 crc kubenswrapper[5005]: I0225 14:04:56.692866 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:04:56 crc kubenswrapper[5005]: E0225 14:04:56.693676 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:05:11 crc kubenswrapper[5005]: I0225 14:05:11.685812 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:05:11 crc kubenswrapper[5005]: E0225 14:05:11.686526 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:05:24 crc kubenswrapper[5005]: I0225 14:05:24.685846 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:05:24 crc kubenswrapper[5005]: E0225 14:05:24.686626 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:05:35 crc kubenswrapper[5005]: I0225 14:05:35.685286 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:05:35 crc kubenswrapper[5005]: E0225 14:05:35.686119 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:05:49 crc kubenswrapper[5005]: I0225 14:05:49.686074 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b" Feb 25 14:05:49 crc kubenswrapper[5005]: E0225 14:05:49.686882 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.140795 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533806-k8cv7"] Feb 25 14:06:00 crc kubenswrapper[5005]: E0225 14:06:00.141628 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed1902d-ada0-4e8e-8a7a-0aa928f09599" containerName="oc" Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.141640 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed1902d-ada0-4e8e-8a7a-0aa928f09599" containerName="oc" Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.141823 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed1902d-ada0-4e8e-8a7a-0aa928f09599" containerName="oc" Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.142494 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.144556 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.144645 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.146053 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.151729 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533806-k8cv7"]
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.234069 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlpzf\" (UniqueName: \"kubernetes.io/projected/5e4330aa-2c49-4162-9cb6-362f8a4a3356-kube-api-access-qlpzf\") pod \"auto-csr-approver-29533806-k8cv7\" (UID: \"5e4330aa-2c49-4162-9cb6-362f8a4a3356\") " pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.336396 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlpzf\" (UniqueName: \"kubernetes.io/projected/5e4330aa-2c49-4162-9cb6-362f8a4a3356-kube-api-access-qlpzf\") pod \"auto-csr-approver-29533806-k8cv7\" (UID: \"5e4330aa-2c49-4162-9cb6-362f8a4a3356\") " pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.355715 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlpzf\" (UniqueName: \"kubernetes.io/projected/5e4330aa-2c49-4162-9cb6-362f8a4a3356-kube-api-access-qlpzf\") pod \"auto-csr-approver-29533806-k8cv7\" (UID: \"5e4330aa-2c49-4162-9cb6-362f8a4a3356\") " pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.464874 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.952792 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533806-k8cv7"]
Feb 25 14:06:00 crc kubenswrapper[5005]: I0225 14:06:00.964669 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 25 14:06:01 crc kubenswrapper[5005]: I0225 14:06:01.646526 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533806-k8cv7" event={"ID":"5e4330aa-2c49-4162-9cb6-362f8a4a3356","Type":"ContainerStarted","Data":"f5d8f9cd2842aa297c57e04eccefd7cb15d7ecce4d2a3e8f781e749df8e81f8a"}
Feb 25 14:06:01 crc kubenswrapper[5005]: I0225 14:06:01.685579 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"
Feb 25 14:06:01 crc kubenswrapper[5005]: E0225 14:06:01.685863 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 14:06:02 crc kubenswrapper[5005]: I0225 14:06:02.656061 5005 generic.go:334] "Generic (PLEG): container finished" podID="5e4330aa-2c49-4162-9cb6-362f8a4a3356" containerID="a95ce97398aaf9b95b476148bda0e53ea2c6631903e2ea61d74d015db6aa35e0" exitCode=0
Feb 25 14:06:02 crc kubenswrapper[5005]: I0225 14:06:02.656369 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533806-k8cv7" event={"ID":"5e4330aa-2c49-4162-9cb6-362f8a4a3356","Type":"ContainerDied","Data":"a95ce97398aaf9b95b476148bda0e53ea2c6631903e2ea61d74d015db6aa35e0"}
Feb 25 14:06:03 crc kubenswrapper[5005]: I0225 14:06:03.985274 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:04 crc kubenswrapper[5005]: I0225 14:06:04.110868 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlpzf\" (UniqueName: \"kubernetes.io/projected/5e4330aa-2c49-4162-9cb6-362f8a4a3356-kube-api-access-qlpzf\") pod \"5e4330aa-2c49-4162-9cb6-362f8a4a3356\" (UID: \"5e4330aa-2c49-4162-9cb6-362f8a4a3356\") "
Feb 25 14:06:04 crc kubenswrapper[5005]: I0225 14:06:04.117688 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4330aa-2c49-4162-9cb6-362f8a4a3356-kube-api-access-qlpzf" (OuterVolumeSpecName: "kube-api-access-qlpzf") pod "5e4330aa-2c49-4162-9cb6-362f8a4a3356" (UID: "5e4330aa-2c49-4162-9cb6-362f8a4a3356"). InnerVolumeSpecName "kube-api-access-qlpzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 14:06:04 crc kubenswrapper[5005]: I0225 14:06:04.213575 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlpzf\" (UniqueName: \"kubernetes.io/projected/5e4330aa-2c49-4162-9cb6-362f8a4a3356-kube-api-access-qlpzf\") on node \"crc\" DevicePath \"\""
Feb 25 14:06:04 crc kubenswrapper[5005]: I0225 14:06:04.676406 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533806-k8cv7" event={"ID":"5e4330aa-2c49-4162-9cb6-362f8a4a3356","Type":"ContainerDied","Data":"f5d8f9cd2842aa297c57e04eccefd7cb15d7ecce4d2a3e8f781e749df8e81f8a"}
Feb 25 14:06:04 crc kubenswrapper[5005]: I0225 14:06:04.676448 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d8f9cd2842aa297c57e04eccefd7cb15d7ecce4d2a3e8f781e749df8e81f8a"
Feb 25 14:06:04 crc kubenswrapper[5005]: I0225 14:06:04.676457 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533806-k8cv7"
Feb 25 14:06:05 crc kubenswrapper[5005]: I0225 14:06:05.054643 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533800-r758t"]
Feb 25 14:06:05 crc kubenswrapper[5005]: I0225 14:06:05.062624 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533800-r758t"]
Feb 25 14:06:06 crc kubenswrapper[5005]: I0225 14:06:06.714931 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adefa602-56b0-4df1-bbb5-a49a335696e5" path="/var/lib/kubelet/pods/adefa602-56b0-4df1-bbb5-a49a335696e5/volumes"
Feb 25 14:06:14 crc kubenswrapper[5005]: I0225 14:06:14.686033 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"
Feb 25 14:06:14 crc kubenswrapper[5005]: E0225 14:06:14.687052 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 14:06:27 crc kubenswrapper[5005]: I0225 14:06:27.685251 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"
Feb 25 14:06:27 crc kubenswrapper[5005]: E0225 14:06:27.685976 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"
Feb 25 14:06:28 crc kubenswrapper[5005]: I0225 14:06:28.023624 5005 scope.go:117] "RemoveContainer" containerID="94b55ee1a0d326c639523366bfd28c639382af3ad1a6a6d631833cb85c976cec"
Feb 25 14:06:39 crc kubenswrapper[5005]: I0225 14:06:39.685949 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"
Feb 25 14:06:40 crc kubenswrapper[5005]: I0225 14:06:40.010635 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"38175ea92de24a24d2b60b32f6965a44b6aebab81fa0434b8fdc104643cc42ed"}
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.148741 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533808-cfnps"]
Feb 25 14:08:00 crc kubenswrapper[5005]: E0225 14:08:00.150088 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4330aa-2c49-4162-9cb6-362f8a4a3356" containerName="oc"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.150113 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4330aa-2c49-4162-9cb6-362f8a4a3356" containerName="oc"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.150495 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4330aa-2c49-4162-9cb6-362f8a4a3356" containerName="oc"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.151497 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.155635 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.155795 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.157548 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwf9s\" (UniqueName: \"kubernetes.io/projected/d6112fc0-5d70-4d6b-9b4d-7117488fafb1-kube-api-access-wwf9s\") pod \"auto-csr-approver-29533808-cfnps\" (UID: \"d6112fc0-5d70-4d6b-9b4d-7117488fafb1\") " pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.158644 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.168476 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533808-cfnps"]
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.260097 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwf9s\" (UniqueName: \"kubernetes.io/projected/d6112fc0-5d70-4d6b-9b4d-7117488fafb1-kube-api-access-wwf9s\") pod \"auto-csr-approver-29533808-cfnps\" (UID: \"d6112fc0-5d70-4d6b-9b4d-7117488fafb1\") " pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.286228 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwf9s\" (UniqueName: \"kubernetes.io/projected/d6112fc0-5d70-4d6b-9b4d-7117488fafb1-kube-api-access-wwf9s\") pod \"auto-csr-approver-29533808-cfnps\" (UID: \"d6112fc0-5d70-4d6b-9b4d-7117488fafb1\") " pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.473464 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:00 crc kubenswrapper[5005]: I0225 14:08:00.974834 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533808-cfnps"]
Feb 25 14:08:00 crc kubenswrapper[5005]: W0225 14:08:00.977890 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6112fc0_5d70_4d6b_9b4d_7117488fafb1.slice/crio-eecb40303b58391e50fa7598dd7fc49b717cf81cc4009b1e725905cb653f7910 WatchSource:0}: Error finding container eecb40303b58391e50fa7598dd7fc49b717cf81cc4009b1e725905cb653f7910: Status 404 returned error can't find the container with id eecb40303b58391e50fa7598dd7fc49b717cf81cc4009b1e725905cb653f7910
Feb 25 14:08:01 crc kubenswrapper[5005]: I0225 14:08:01.762664 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533808-cfnps" event={"ID":"d6112fc0-5d70-4d6b-9b4d-7117488fafb1","Type":"ContainerStarted","Data":"eecb40303b58391e50fa7598dd7fc49b717cf81cc4009b1e725905cb653f7910"}
Feb 25 14:08:02 crc kubenswrapper[5005]: I0225 14:08:02.773779 5005 generic.go:334] "Generic (PLEG): container finished" podID="d6112fc0-5d70-4d6b-9b4d-7117488fafb1" containerID="9f21062f687d565ea7ba2566d46c9ece4a3276dab27b12f3a5828a2d0a21e20d" exitCode=0
Feb 25 14:08:02 crc kubenswrapper[5005]: I0225 14:08:02.773848 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533808-cfnps" event={"ID":"d6112fc0-5d70-4d6b-9b4d-7117488fafb1","Type":"ContainerDied","Data":"9f21062f687d565ea7ba2566d46c9ece4a3276dab27b12f3a5828a2d0a21e20d"}
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.211493 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.237417 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwf9s\" (UniqueName: \"kubernetes.io/projected/d6112fc0-5d70-4d6b-9b4d-7117488fafb1-kube-api-access-wwf9s\") pod \"d6112fc0-5d70-4d6b-9b4d-7117488fafb1\" (UID: \"d6112fc0-5d70-4d6b-9b4d-7117488fafb1\") "
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.244116 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6112fc0-5d70-4d6b-9b4d-7117488fafb1-kube-api-access-wwf9s" (OuterVolumeSpecName: "kube-api-access-wwf9s") pod "d6112fc0-5d70-4d6b-9b4d-7117488fafb1" (UID: "d6112fc0-5d70-4d6b-9b4d-7117488fafb1"). InnerVolumeSpecName "kube-api-access-wwf9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.339249 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwf9s\" (UniqueName: \"kubernetes.io/projected/d6112fc0-5d70-4d6b-9b4d-7117488fafb1-kube-api-access-wwf9s\") on node \"crc\" DevicePath \"\""
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.806417 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533808-cfnps" event={"ID":"d6112fc0-5d70-4d6b-9b4d-7117488fafb1","Type":"ContainerDied","Data":"eecb40303b58391e50fa7598dd7fc49b717cf81cc4009b1e725905cb653f7910"}
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.806714 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eecb40303b58391e50fa7598dd7fc49b717cf81cc4009b1e725905cb653f7910"
Feb 25 14:08:04 crc kubenswrapper[5005]: I0225 14:08:04.806481 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533808-cfnps"
Feb 25 14:08:05 crc kubenswrapper[5005]: I0225 14:08:05.318767 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533802-rc5z9"]
Feb 25 14:08:05 crc kubenswrapper[5005]: I0225 14:08:05.337389 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533802-rc5z9"]
Feb 25 14:08:06 crc kubenswrapper[5005]: I0225 14:08:06.699517 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea66c9b-0080-4cb8-bc92-95471e197568" path="/var/lib/kubelet/pods/8ea66c9b-0080-4cb8-bc92-95471e197568/volumes"
Feb 25 14:08:28 crc kubenswrapper[5005]: I0225 14:08:28.132790 5005 scope.go:117] "RemoveContainer" containerID="c5a39f3b55e703b419e2ec84bc8dbcd650643048bce96c8b68b82a9e25262ec0"
Feb 25 14:08:58 crc kubenswrapper[5005]: I0225 14:08:58.087107 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 25 14:08:58 crc kubenswrapper[5005]: I0225 14:08:58.087736 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.191924 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6ts5z"]
Feb 25 14:09:14 crc kubenswrapper[5005]: E0225 14:09:14.192966 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6112fc0-5d70-4d6b-9b4d-7117488fafb1" containerName="oc"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.192982 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6112fc0-5d70-4d6b-9b4d-7117488fafb1" containerName="oc"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.193238 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6112fc0-5d70-4d6b-9b4d-7117488fafb1" containerName="oc"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.194862 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.220008 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ts5z"]
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.273688 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-utilities\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.273761 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2fp\" (UniqueName: \"kubernetes.io/projected/d20e3396-edc9-4181-866d-2f071dc28471-kube-api-access-wc2fp\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.273795 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-catalog-content\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.375288 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc2fp\" (UniqueName: \"kubernetes.io/projected/d20e3396-edc9-4181-866d-2f071dc28471-kube-api-access-wc2fp\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.375349 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-catalog-content\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.375530 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-utilities\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.376012 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-catalog-content\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.376126 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-utilities\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.396488 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc2fp\" (UniqueName: \"kubernetes.io/projected/d20e3396-edc9-4181-866d-2f071dc28471-kube-api-access-wc2fp\") pod \"community-operators-6ts5z\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") " pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:14 crc kubenswrapper[5005]: I0225 14:09:14.519560 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:15 crc kubenswrapper[5005]: I0225 14:09:15.098905 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ts5z"]
Feb 25 14:09:15 crc kubenswrapper[5005]: I0225 14:09:15.431435 5005 generic.go:334] "Generic (PLEG): container finished" podID="d20e3396-edc9-4181-866d-2f071dc28471" containerID="18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca" exitCode=0
Feb 25 14:09:15 crc kubenswrapper[5005]: I0225 14:09:15.431479 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerDied","Data":"18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca"}
Feb 25 14:09:15 crc kubenswrapper[5005]: I0225 14:09:15.432315 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerStarted","Data":"e1c65096d52057d7b9e4a51eca23c40ec3f04975f8122d25fe559940f193fd51"}
Feb 25 14:09:16 crc kubenswrapper[5005]: I0225 14:09:16.442345 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerStarted","Data":"0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da"}
Feb 25 14:09:17 crc kubenswrapper[5005]: I0225 14:09:17.451820 5005 generic.go:334] "Generic (PLEG): container finished" podID="d20e3396-edc9-4181-866d-2f071dc28471" containerID="0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da" exitCode=0
Feb 25 14:09:17 crc kubenswrapper[5005]: I0225 14:09:17.451935 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerDied","Data":"0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da"}
Feb 25 14:09:18 crc kubenswrapper[5005]: I0225 14:09:18.467988 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerStarted","Data":"99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613"}
Feb 25 14:09:18 crc kubenswrapper[5005]: I0225 14:09:18.489994 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6ts5z" podStartSLOduration=2.081473459 podStartE2EDuration="4.48997123s" podCreationTimestamp="2026-02-25 14:09:14 +0000 UTC" firstStartedPulling="2026-02-25 14:09:15.433779352 +0000 UTC m=+10269.474511679" lastFinishedPulling="2026-02-25 14:09:17.842277123 +0000 UTC m=+10271.883009450" observedRunningTime="2026-02-25 14:09:18.487502943 +0000 UTC m=+10272.528235290" watchObservedRunningTime="2026-02-25 14:09:18.48997123 +0000 UTC m=+10272.530703617"
Feb 25 14:09:24 crc kubenswrapper[5005]: I0225 14:09:24.519726 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:24 crc kubenswrapper[5005]: I0225 14:09:24.520311 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:24 crc kubenswrapper[5005]: I0225 14:09:24.562868 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:25 crc kubenswrapper[5005]: I0225 14:09:25.641784 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:25 crc kubenswrapper[5005]: I0225 14:09:25.691888 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ts5z"]
Feb 25 14:09:27 crc kubenswrapper[5005]: I0225 14:09:27.550815 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6ts5z" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="registry-server" containerID="cri-o://99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613" gracePeriod=2
Feb 25 14:09:27 crc kubenswrapper[5005]: I0225 14:09:27.998033 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.086869 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.086933 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.144974 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc2fp\" (UniqueName: \"kubernetes.io/projected/d20e3396-edc9-4181-866d-2f071dc28471-kube-api-access-wc2fp\") pod \"d20e3396-edc9-4181-866d-2f071dc28471\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") "
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.145229 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-catalog-content\") pod \"d20e3396-edc9-4181-866d-2f071dc28471\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") "
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.145322 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-utilities\") pod \"d20e3396-edc9-4181-866d-2f071dc28471\" (UID: \"d20e3396-edc9-4181-866d-2f071dc28471\") "
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.146476 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-utilities" (OuterVolumeSpecName: "utilities") pod "d20e3396-edc9-4181-866d-2f071dc28471" (UID: "d20e3396-edc9-4181-866d-2f071dc28471"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.150993 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20e3396-edc9-4181-866d-2f071dc28471-kube-api-access-wc2fp" (OuterVolumeSpecName: "kube-api-access-wc2fp") pod "d20e3396-edc9-4181-866d-2f071dc28471" (UID: "d20e3396-edc9-4181-866d-2f071dc28471"). InnerVolumeSpecName "kube-api-access-wc2fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.203066 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d20e3396-edc9-4181-866d-2f071dc28471" (UID: "d20e3396-edc9-4181-866d-2f071dc28471"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.247514 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-utilities\") on node \"crc\" DevicePath \"\""
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.247558 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc2fp\" (UniqueName: \"kubernetes.io/projected/d20e3396-edc9-4181-866d-2f071dc28471-kube-api-access-wc2fp\") on node \"crc\" DevicePath \"\""
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.247574 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20e3396-edc9-4181-866d-2f071dc28471-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.560306 5005 generic.go:334] "Generic (PLEG): container finished" podID="d20e3396-edc9-4181-866d-2f071dc28471" containerID="99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613" exitCode=0
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.560348 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerDied","Data":"99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613"}
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.560389 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ts5z" event={"ID":"d20e3396-edc9-4181-866d-2f071dc28471","Type":"ContainerDied","Data":"e1c65096d52057d7b9e4a51eca23c40ec3f04975f8122d25fe559940f193fd51"}
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.560385 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ts5z"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.560405 5005 scope.go:117] "RemoveContainer" containerID="99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.579496 5005 scope.go:117] "RemoveContainer" containerID="0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.593011 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ts5z"]
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.602590 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6ts5z"]
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.610605 5005 scope.go:117] "RemoveContainer" containerID="18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.641729 5005 scope.go:117] "RemoveContainer" containerID="99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613"
Feb 25 14:09:28 crc kubenswrapper[5005]: E0225 14:09:28.642139 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613\": container with ID starting with 99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613 not found: ID does not exist" containerID="99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.642177 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613"} err="failed to get container status \"99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613\": rpc error: code = NotFound desc = could not find container \"99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613\": container with ID starting with 99fbb1c952b2e86a5420f96f378ef103874243b4c1a2d8fdbffc30f73f45f613 not found: ID does not exist"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.642199 5005 scope.go:117] "RemoveContainer" containerID="0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da"
Feb 25 14:09:28 crc kubenswrapper[5005]: E0225 14:09:28.642651 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da\": container with ID starting with 0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da not found: ID does not exist" containerID="0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.642670 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da"} err="failed to get container status \"0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da\": rpc error: code = NotFound desc = could not find container \"0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da\": container with ID starting with 0b18e7f61d4411fde5615664d9ac96b4f197f6f1b3045c588b5f99bba216b0da not found: ID does not exist"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.642684 5005 scope.go:117] "RemoveContainer" containerID="18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca"
Feb 25 14:09:28 crc kubenswrapper[5005]: E0225 14:09:28.643138 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca\": container with ID starting with 18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca not found: ID does not exist" containerID="18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.643184 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca"} err="failed to get container status \"18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca\": rpc error: code = NotFound desc = could not find container \"18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca\": container with ID starting with 18448c4ad209abc957be25b92a75e2a406a7fea875ee3f90c3ad7d8eb0ea74ca not found: ID does not exist"
Feb 25 14:09:28 crc kubenswrapper[5005]: I0225 14:09:28.695326 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20e3396-edc9-4181-866d-2f071dc28471" path="/var/lib/kubelet/pods/d20e3396-edc9-4181-866d-2f071dc28471/volumes"
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.087259 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.088263 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.088318 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q"
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.089214 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38175ea92de24a24d2b60b32f6965a44b6aebab81fa0434b8fdc104643cc42ed"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.089275 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://38175ea92de24a24d2b60b32f6965a44b6aebab81fa0434b8fdc104643cc42ed" gracePeriod=600
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.817820 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="38175ea92de24a24d2b60b32f6965a44b6aebab81fa0434b8fdc104643cc42ed" exitCode=0
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.817896 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"38175ea92de24a24d2b60b32f6965a44b6aebab81fa0434b8fdc104643cc42ed"}
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.818457 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1"}
Feb 25 14:09:58 crc kubenswrapper[5005]: I0225 14:09:58.818476 5005 scope.go:117] "RemoveContainer" containerID="09011083cf2f6f987ca97f765e283a7d3f7fb45513d71b6530b852477309d29b"
Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.141834 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533810-9lfw8"]
Feb 
25 14:10:00 crc kubenswrapper[5005]: E0225 14:10:00.143533 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="extract-content" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.143616 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="extract-content" Feb 25 14:10:00 crc kubenswrapper[5005]: E0225 14:10:00.143697 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="registry-server" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.143753 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="registry-server" Feb 25 14:10:00 crc kubenswrapper[5005]: E0225 14:10:00.143825 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="extract-utilities" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.143887 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="extract-utilities" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.144121 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="d20e3396-edc9-4181-866d-2f071dc28471" containerName="registry-server" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.145034 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.146962 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.147213 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.147824 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.154067 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533810-9lfw8"] Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.201848 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvdwc\" (UniqueName: \"kubernetes.io/projected/ab364140-1ee9-48f2-b494-8118720b89e7-kube-api-access-mvdwc\") pod \"auto-csr-approver-29533810-9lfw8\" (UID: \"ab364140-1ee9-48f2-b494-8118720b89e7\") " pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.303319 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdwc\" (UniqueName: \"kubernetes.io/projected/ab364140-1ee9-48f2-b494-8118720b89e7-kube-api-access-mvdwc\") pod \"auto-csr-approver-29533810-9lfw8\" (UID: \"ab364140-1ee9-48f2-b494-8118720b89e7\") " pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.326441 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdwc\" (UniqueName: \"kubernetes.io/projected/ab364140-1ee9-48f2-b494-8118720b89e7-kube-api-access-mvdwc\") pod \"auto-csr-approver-29533810-9lfw8\" (UID: \"ab364140-1ee9-48f2-b494-8118720b89e7\") " 
pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.469218 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:00 crc kubenswrapper[5005]: I0225 14:10:00.939876 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533810-9lfw8"] Feb 25 14:10:01 crc kubenswrapper[5005]: I0225 14:10:01.849568 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" event={"ID":"ab364140-1ee9-48f2-b494-8118720b89e7","Type":"ContainerStarted","Data":"e84624be5e0b80b76e5e848970ca034aa1a7fc35a26803d52a8557941e77c1d2"} Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.448223 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tndzf"] Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.451295 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.458601 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tndzf"] Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.552808 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q577n\" (UniqueName: \"kubernetes.io/projected/890442d2-083b-4c53-b2d6-d725c94b8025-kube-api-access-q577n\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.552889 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-catalog-content\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.552926 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-utilities\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.654839 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q577n\" (UniqueName: \"kubernetes.io/projected/890442d2-083b-4c53-b2d6-d725c94b8025-kube-api-access-q577n\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.654927 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-catalog-content\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.654968 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-utilities\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.655544 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-catalog-content\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.655638 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-utilities\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.688396 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q577n\" (UniqueName: \"kubernetes.io/projected/890442d2-083b-4c53-b2d6-d725c94b8025-kube-api-access-q577n\") pod \"redhat-marketplace-tndzf\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.816737 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.867060 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" event={"ID":"ab364140-1ee9-48f2-b494-8118720b89e7","Type":"ContainerStarted","Data":"c0ef01cc88fea1852f4c8d46ddcc19af6937cbf00b7369e80bc04700b9e5518f"} Feb 25 14:10:02 crc kubenswrapper[5005]: I0225 14:10:02.888911 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" podStartSLOduration=1.56510675 podStartE2EDuration="2.88889333s" podCreationTimestamp="2026-02-25 14:10:00 +0000 UTC" firstStartedPulling="2026-02-25 14:10:00.947986027 +0000 UTC m=+10314.988718364" lastFinishedPulling="2026-02-25 14:10:02.271772607 +0000 UTC m=+10316.312504944" observedRunningTime="2026-02-25 14:10:02.883316654 +0000 UTC m=+10316.924048971" watchObservedRunningTime="2026-02-25 14:10:02.88889333 +0000 UTC m=+10316.929625657" Feb 25 14:10:03 crc kubenswrapper[5005]: I0225 14:10:03.330851 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tndzf"] Feb 25 14:10:03 crc kubenswrapper[5005]: W0225 14:10:03.332441 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod890442d2_083b_4c53_b2d6_d725c94b8025.slice/crio-b9fe3a25f5abc3c6d1e4dc1cc2b49f9aefa1515e433facf1d7fcf78a15fc376b WatchSource:0}: Error finding container b9fe3a25f5abc3c6d1e4dc1cc2b49f9aefa1515e433facf1d7fcf78a15fc376b: Status 404 returned error can't find the container with id b9fe3a25f5abc3c6d1e4dc1cc2b49f9aefa1515e433facf1d7fcf78a15fc376b Feb 25 14:10:03 crc kubenswrapper[5005]: I0225 14:10:03.880619 5005 generic.go:334] "Generic (PLEG): container finished" podID="ab364140-1ee9-48f2-b494-8118720b89e7" containerID="c0ef01cc88fea1852f4c8d46ddcc19af6937cbf00b7369e80bc04700b9e5518f" 
exitCode=0 Feb 25 14:10:03 crc kubenswrapper[5005]: I0225 14:10:03.881594 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" event={"ID":"ab364140-1ee9-48f2-b494-8118720b89e7","Type":"ContainerDied","Data":"c0ef01cc88fea1852f4c8d46ddcc19af6937cbf00b7369e80bc04700b9e5518f"} Feb 25 14:10:03 crc kubenswrapper[5005]: I0225 14:10:03.886415 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tndzf" event={"ID":"890442d2-083b-4c53-b2d6-d725c94b8025","Type":"ContainerStarted","Data":"b9fe3a25f5abc3c6d1e4dc1cc2b49f9aefa1515e433facf1d7fcf78a15fc376b"} Feb 25 14:10:04 crc kubenswrapper[5005]: I0225 14:10:04.897358 5005 generic.go:334] "Generic (PLEG): container finished" podID="890442d2-083b-4c53-b2d6-d725c94b8025" containerID="61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4" exitCode=0 Feb 25 14:10:04 crc kubenswrapper[5005]: I0225 14:10:04.897537 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tndzf" event={"ID":"890442d2-083b-4c53-b2d6-d725c94b8025","Type":"ContainerDied","Data":"61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4"} Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.252273 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.326404 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvdwc\" (UniqueName: \"kubernetes.io/projected/ab364140-1ee9-48f2-b494-8118720b89e7-kube-api-access-mvdwc\") pod \"ab364140-1ee9-48f2-b494-8118720b89e7\" (UID: \"ab364140-1ee9-48f2-b494-8118720b89e7\") " Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.758579 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab364140-1ee9-48f2-b494-8118720b89e7-kube-api-access-mvdwc" (OuterVolumeSpecName: "kube-api-access-mvdwc") pod "ab364140-1ee9-48f2-b494-8118720b89e7" (UID: "ab364140-1ee9-48f2-b494-8118720b89e7"). InnerVolumeSpecName "kube-api-access-mvdwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.840845 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvdwc\" (UniqueName: \"kubernetes.io/projected/ab364140-1ee9-48f2-b494-8118720b89e7-kube-api-access-mvdwc\") on node \"crc\" DevicePath \"\"" Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.910185 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" event={"ID":"ab364140-1ee9-48f2-b494-8118720b89e7","Type":"ContainerDied","Data":"e84624be5e0b80b76e5e848970ca034aa1a7fc35a26803d52a8557941e77c1d2"} Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.910224 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e84624be5e0b80b76e5e848970ca034aa1a7fc35a26803d52a8557941e77c1d2" Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.910276 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533810-9lfw8" Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.952871 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533804-n76ck"] Feb 25 14:10:05 crc kubenswrapper[5005]: I0225 14:10:05.963488 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533804-n76ck"] Feb 25 14:10:06 crc kubenswrapper[5005]: I0225 14:10:06.696981 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed1902d-ada0-4e8e-8a7a-0aa928f09599" path="/var/lib/kubelet/pods/8ed1902d-ada0-4e8e-8a7a-0aa928f09599/volumes" Feb 25 14:10:06 crc kubenswrapper[5005]: I0225 14:10:06.919406 5005 generic.go:334] "Generic (PLEG): container finished" podID="890442d2-083b-4c53-b2d6-d725c94b8025" containerID="373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa" exitCode=0 Feb 25 14:10:06 crc kubenswrapper[5005]: I0225 14:10:06.919452 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tndzf" event={"ID":"890442d2-083b-4c53-b2d6-d725c94b8025","Type":"ContainerDied","Data":"373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa"} Feb 25 14:10:08 crc kubenswrapper[5005]: I0225 14:10:08.937839 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tndzf" event={"ID":"890442d2-083b-4c53-b2d6-d725c94b8025","Type":"ContainerStarted","Data":"ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b"} Feb 25 14:10:08 crc kubenswrapper[5005]: I0225 14:10:08.960590 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tndzf" podStartSLOduration=3.204592925 podStartE2EDuration="6.960568783s" podCreationTimestamp="2026-02-25 14:10:02 +0000 UTC" firstStartedPulling="2026-02-25 14:10:04.899315502 +0000 UTC m=+10318.940047829" lastFinishedPulling="2026-02-25 
14:10:08.65529136 +0000 UTC m=+10322.696023687" observedRunningTime="2026-02-25 14:10:08.951443845 +0000 UTC m=+10322.992176172" watchObservedRunningTime="2026-02-25 14:10:08.960568783 +0000 UTC m=+10323.001301110" Feb 25 14:10:12 crc kubenswrapper[5005]: I0225 14:10:12.816964 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:12 crc kubenswrapper[5005]: I0225 14:10:12.818487 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:12 crc kubenswrapper[5005]: I0225 14:10:12.876981 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:22 crc kubenswrapper[5005]: I0225 14:10:22.860137 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:22 crc kubenswrapper[5005]: I0225 14:10:22.913215 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tndzf"] Feb 25 14:10:23 crc kubenswrapper[5005]: I0225 14:10:23.051841 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tndzf" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="registry-server" containerID="cri-o://ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b" gracePeriod=2 Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.003215 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.061184 5005 generic.go:334] "Generic (PLEG): container finished" podID="890442d2-083b-4c53-b2d6-d725c94b8025" containerID="ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b" exitCode=0 Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.061227 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tndzf" event={"ID":"890442d2-083b-4c53-b2d6-d725c94b8025","Type":"ContainerDied","Data":"ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b"} Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.061246 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tndzf" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.061252 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tndzf" event={"ID":"890442d2-083b-4c53-b2d6-d725c94b8025","Type":"ContainerDied","Data":"b9fe3a25f5abc3c6d1e4dc1cc2b49f9aefa1515e433facf1d7fcf78a15fc376b"} Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.061273 5005 scope.go:117] "RemoveContainer" containerID="ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.086806 5005 scope.go:117] "RemoveContainer" containerID="373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.134052 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q577n\" (UniqueName: \"kubernetes.io/projected/890442d2-083b-4c53-b2d6-d725c94b8025-kube-api-access-q577n\") pod \"890442d2-083b-4c53-b2d6-d725c94b8025\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.134227 5005 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-catalog-content\") pod \"890442d2-083b-4c53-b2d6-d725c94b8025\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.134300 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-utilities\") pod \"890442d2-083b-4c53-b2d6-d725c94b8025\" (UID: \"890442d2-083b-4c53-b2d6-d725c94b8025\") " Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.135706 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-utilities" (OuterVolumeSpecName: "utilities") pod "890442d2-083b-4c53-b2d6-d725c94b8025" (UID: "890442d2-083b-4c53-b2d6-d725c94b8025"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.157509 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890442d2-083b-4c53-b2d6-d725c94b8025-kube-api-access-q577n" (OuterVolumeSpecName: "kube-api-access-q577n") pod "890442d2-083b-4c53-b2d6-d725c94b8025" (UID: "890442d2-083b-4c53-b2d6-d725c94b8025"). InnerVolumeSpecName "kube-api-access-q577n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.160222 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "890442d2-083b-4c53-b2d6-d725c94b8025" (UID: "890442d2-083b-4c53-b2d6-d725c94b8025"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.167686 5005 scope.go:117] "RemoveContainer" containerID="61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.189756 5005 scope.go:117] "RemoveContainer" containerID="ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b" Feb 25 14:10:24 crc kubenswrapper[5005]: E0225 14:10:24.190168 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b\": container with ID starting with ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b not found: ID does not exist" containerID="ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.190212 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b"} err="failed to get container status \"ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b\": rpc error: code = NotFound desc = could not find container \"ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b\": container with ID starting with ca3c4bb8b2f334213c12e98a81db264c8c4c699531c23f1db2bace15165e654b not found: ID does not exist" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.190237 5005 scope.go:117] "RemoveContainer" containerID="373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa" Feb 25 14:10:24 crc kubenswrapper[5005]: E0225 14:10:24.191017 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa\": container with ID starting with 
373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa not found: ID does not exist" containerID="373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.191047 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa"} err="failed to get container status \"373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa\": rpc error: code = NotFound desc = could not find container \"373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa\": container with ID starting with 373e3cea8ed2fd14a278304d2fdcd7411be1b67926a7e3e052daa08623a49cfa not found: ID does not exist" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.191063 5005 scope.go:117] "RemoveContainer" containerID="61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4" Feb 25 14:10:24 crc kubenswrapper[5005]: E0225 14:10:24.191320 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4\": container with ID starting with 61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4 not found: ID does not exist" containerID="61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.191345 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4"} err="failed to get container status \"61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4\": rpc error: code = NotFound desc = could not find container \"61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4\": container with ID starting with 61b46fd19d2d78802f9f6aea5ed4188e30a51333aac287118fe75ea8bb9ff9d4 not found: ID does not 
exist" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.236452 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q577n\" (UniqueName: \"kubernetes.io/projected/890442d2-083b-4c53-b2d6-d725c94b8025-kube-api-access-q577n\") on node \"crc\" DevicePath \"\"" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.236488 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.236500 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/890442d2-083b-4c53-b2d6-d725c94b8025-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.394150 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tndzf"] Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.404547 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tndzf"] Feb 25 14:10:24 crc kubenswrapper[5005]: I0225 14:10:24.696434 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" path="/var/lib/kubelet/pods/890442d2-083b-4c53-b2d6-d725c94b8025/volumes" Feb 25 14:10:28 crc kubenswrapper[5005]: I0225 14:10:28.231481 5005 scope.go:117] "RemoveContainer" containerID="b3997995268c54d20b208b9300d81f42de545399896935ad7c0c997229056fe4" Feb 25 14:11:58 crc kubenswrapper[5005]: I0225 14:11:58.087927 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:11:58 crc kubenswrapper[5005]: I0225 
14:11:58.088625 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140031 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533812-s6p86"] Feb 25 14:12:00 crc kubenswrapper[5005]: E0225 14:12:00.140428 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="extract-content" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140439 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="extract-content" Feb 25 14:12:00 crc kubenswrapper[5005]: E0225 14:12:00.140468 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="registry-server" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140474 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="registry-server" Feb 25 14:12:00 crc kubenswrapper[5005]: E0225 14:12:00.140498 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="extract-utilities" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140505 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="extract-utilities" Feb 25 14:12:00 crc kubenswrapper[5005]: E0225 14:12:00.140523 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab364140-1ee9-48f2-b494-8118720b89e7" containerName="oc" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140529 5005 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ab364140-1ee9-48f2-b494-8118720b89e7" containerName="oc" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140718 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="890442d2-083b-4c53-b2d6-d725c94b8025" containerName="registry-server" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.140741 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab364140-1ee9-48f2-b494-8118720b89e7" containerName="oc" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.141420 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.143304 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.143423 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.146727 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.155531 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533812-s6p86"] Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.255309 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxb2\" (UniqueName: \"kubernetes.io/projected/7cb8f230-652e-4c81-b2af-9849280eabe6-kube-api-access-8bxb2\") pod \"auto-csr-approver-29533812-s6p86\" (UID: \"7cb8f230-652e-4c81-b2af-9849280eabe6\") " pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.358285 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxb2\" (UniqueName: 
\"kubernetes.io/projected/7cb8f230-652e-4c81-b2af-9849280eabe6-kube-api-access-8bxb2\") pod \"auto-csr-approver-29533812-s6p86\" (UID: \"7cb8f230-652e-4c81-b2af-9849280eabe6\") " pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.377552 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxb2\" (UniqueName: \"kubernetes.io/projected/7cb8f230-652e-4c81-b2af-9849280eabe6-kube-api-access-8bxb2\") pod \"auto-csr-approver-29533812-s6p86\" (UID: \"7cb8f230-652e-4c81-b2af-9849280eabe6\") " pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.489004 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.950228 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533812-s6p86"] Feb 25 14:12:00 crc kubenswrapper[5005]: I0225 14:12:00.967353 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 14:12:01 crc kubenswrapper[5005]: I0225 14:12:01.984172 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533812-s6p86" event={"ID":"7cb8f230-652e-4c81-b2af-9849280eabe6","Type":"ContainerStarted","Data":"d5d5c05bcdd935d7b11059bdaa48290fc379636995b7af85b3597e26c20e51aa"} Feb 25 14:12:02 crc kubenswrapper[5005]: I0225 14:12:02.999799 5005 generic.go:334] "Generic (PLEG): container finished" podID="7cb8f230-652e-4c81-b2af-9849280eabe6" containerID="5edd8f3d83d924607e11bf0f91717a191dd4e59c85a9b0246bce5485f6c0886a" exitCode=0 Feb 25 14:12:02 crc kubenswrapper[5005]: I0225 14:12:02.999847 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533812-s6p86" 
event={"ID":"7cb8f230-652e-4c81-b2af-9849280eabe6","Type":"ContainerDied","Data":"5edd8f3d83d924607e11bf0f91717a191dd4e59c85a9b0246bce5485f6c0886a"} Feb 25 14:12:04 crc kubenswrapper[5005]: I0225 14:12:04.381789 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:04 crc kubenswrapper[5005]: I0225 14:12:04.545595 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxb2\" (UniqueName: \"kubernetes.io/projected/7cb8f230-652e-4c81-b2af-9849280eabe6-kube-api-access-8bxb2\") pod \"7cb8f230-652e-4c81-b2af-9849280eabe6\" (UID: \"7cb8f230-652e-4c81-b2af-9849280eabe6\") " Feb 25 14:12:04 crc kubenswrapper[5005]: I0225 14:12:04.554645 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb8f230-652e-4c81-b2af-9849280eabe6-kube-api-access-8bxb2" (OuterVolumeSpecName: "kube-api-access-8bxb2") pod "7cb8f230-652e-4c81-b2af-9849280eabe6" (UID: "7cb8f230-652e-4c81-b2af-9849280eabe6"). InnerVolumeSpecName "kube-api-access-8bxb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:12:04 crc kubenswrapper[5005]: I0225 14:12:04.648046 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bxb2\" (UniqueName: \"kubernetes.io/projected/7cb8f230-652e-4c81-b2af-9849280eabe6-kube-api-access-8bxb2\") on node \"crc\" DevicePath \"\"" Feb 25 14:12:05 crc kubenswrapper[5005]: I0225 14:12:05.018519 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533812-s6p86" event={"ID":"7cb8f230-652e-4c81-b2af-9849280eabe6","Type":"ContainerDied","Data":"d5d5c05bcdd935d7b11059bdaa48290fc379636995b7af85b3597e26c20e51aa"} Feb 25 14:12:05 crc kubenswrapper[5005]: I0225 14:12:05.018594 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533812-s6p86" Feb 25 14:12:05 crc kubenswrapper[5005]: I0225 14:12:05.018613 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d5c05bcdd935d7b11059bdaa48290fc379636995b7af85b3597e26c20e51aa" Feb 25 14:12:05 crc kubenswrapper[5005]: I0225 14:12:05.467335 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533806-k8cv7"] Feb 25 14:12:05 crc kubenswrapper[5005]: I0225 14:12:05.475659 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533806-k8cv7"] Feb 25 14:12:06 crc kubenswrapper[5005]: I0225 14:12:06.698636 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4330aa-2c49-4162-9cb6-362f8a4a3356" path="/var/lib/kubelet/pods/5e4330aa-2c49-4162-9cb6-362f8a4a3356/volumes" Feb 25 14:12:28 crc kubenswrapper[5005]: I0225 14:12:28.087843 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:12:28 crc kubenswrapper[5005]: I0225 14:12:28.088506 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:12:28 crc kubenswrapper[5005]: I0225 14:12:28.345181 5005 scope.go:117] "RemoveContainer" containerID="a95ce97398aaf9b95b476148bda0e53ea2c6631903e2ea61d74d015db6aa35e0" Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.086808 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.087615 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.087657 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.088365 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.088723 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" gracePeriod=600 Feb 25 14:12:58 crc kubenswrapper[5005]: E0225 14:12:58.212266 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.516689 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" exitCode=0 Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.516731 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1"} Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.516793 5005 scope.go:117] "RemoveContainer" containerID="38175ea92de24a24d2b60b32f6965a44b6aebab81fa0434b8fdc104643cc42ed" Feb 25 14:12:58 crc kubenswrapper[5005]: I0225 14:12:58.517170 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:12:58 crc kubenswrapper[5005]: E0225 14:12:58.517488 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:13:13 crc kubenswrapper[5005]: I0225 14:13:13.515858 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:13:13 crc kubenswrapper[5005]: E0225 14:13:13.517770 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.393401 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ntgqh"] Feb 25 14:13:23 crc kubenswrapper[5005]: E0225 14:13:23.394420 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb8f230-652e-4c81-b2af-9849280eabe6" containerName="oc" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.394438 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb8f230-652e-4c81-b2af-9849280eabe6" containerName="oc" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.394748 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb8f230-652e-4c81-b2af-9849280eabe6" containerName="oc" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.396304 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.404667 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntgqh"] Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.508217 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4xx\" (UniqueName: \"kubernetes.io/projected/a49397bd-3d28-460a-b537-8d4cd30d981d-kube-api-access-9b4xx\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.508361 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-catalog-content\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.508468 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-utilities\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.609867 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-catalog-content\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.609920 5005 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-utilities\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.610060 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b4xx\" (UniqueName: \"kubernetes.io/projected/a49397bd-3d28-460a-b537-8d4cd30d981d-kube-api-access-9b4xx\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.610331 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-catalog-content\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.610469 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-utilities\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.639768 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b4xx\" (UniqueName: \"kubernetes.io/projected/a49397bd-3d28-460a-b537-8d4cd30d981d-kube-api-access-9b4xx\") pod \"redhat-operators-ntgqh\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:23 crc kubenswrapper[5005]: I0225 14:13:23.714505 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:24 crc kubenswrapper[5005]: I0225 14:13:24.197146 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntgqh"] Feb 25 14:13:24 crc kubenswrapper[5005]: I0225 14:13:24.669265 5005 generic.go:334] "Generic (PLEG): container finished" podID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerID="afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d" exitCode=0 Feb 25 14:13:24 crc kubenswrapper[5005]: I0225 14:13:24.669316 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerDied","Data":"afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d"} Feb 25 14:13:24 crc kubenswrapper[5005]: I0225 14:13:24.669526 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerStarted","Data":"9485ab00630fa0f0c608fa2a074d8982ecf3deab80417b1f691626b93dc34210"} Feb 25 14:13:25 crc kubenswrapper[5005]: I0225 14:13:25.678483 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerStarted","Data":"2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04"} Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.591348 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4rs7p"] Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.593267 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.627302 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rs7p"] Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.693256 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9shbw\" (UniqueName: \"kubernetes.io/projected/532b125e-183d-473e-869e-5fe46f1c60c9-kube-api-access-9shbw\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.693356 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-catalog-content\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.693451 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-utilities\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.795835 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9shbw\" (UniqueName: \"kubernetes.io/projected/532b125e-183d-473e-869e-5fe46f1c60c9-kube-api-access-9shbw\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.796017 5005 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-catalog-content\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.796121 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-utilities\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.796992 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-catalog-content\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.797124 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-utilities\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.866486 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9shbw\" (UniqueName: \"kubernetes.io/projected/532b125e-183d-473e-869e-5fe46f1c60c9-kube-api-access-9shbw\") pod \"certified-operators-4rs7p\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:26 crc kubenswrapper[5005]: I0225 14:13:26.933575 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:27 crc kubenswrapper[5005]: I0225 14:13:27.462298 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rs7p"] Feb 25 14:13:27 crc kubenswrapper[5005]: I0225 14:13:27.686244 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:13:27 crc kubenswrapper[5005]: E0225 14:13:27.686508 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:13:27 crc kubenswrapper[5005]: I0225 14:13:27.694462 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerStarted","Data":"2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456"} Feb 25 14:13:27 crc kubenswrapper[5005]: I0225 14:13:27.694502 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerStarted","Data":"5e6283d882145de5575bfdbd823137227b37a090834226ebf568d7ce7a3e7ce8"} Feb 25 14:13:28 crc kubenswrapper[5005]: I0225 14:13:28.703300 5005 generic.go:334] "Generic (PLEG): container finished" podID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerID="2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04" exitCode=0 Feb 25 14:13:28 crc kubenswrapper[5005]: I0225 14:13:28.703387 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" 
event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerDied","Data":"2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04"} Feb 25 14:13:28 crc kubenswrapper[5005]: I0225 14:13:28.706027 5005 generic.go:334] "Generic (PLEG): container finished" podID="532b125e-183d-473e-869e-5fe46f1c60c9" containerID="2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456" exitCode=0 Feb 25 14:13:28 crc kubenswrapper[5005]: I0225 14:13:28.706081 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerDied","Data":"2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456"} Feb 25 14:13:29 crc kubenswrapper[5005]: I0225 14:13:29.715060 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerStarted","Data":"55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c"} Feb 25 14:13:29 crc kubenswrapper[5005]: I0225 14:13:29.731478 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ntgqh" podStartSLOduration=2.28144524 podStartE2EDuration="6.731460315s" podCreationTimestamp="2026-02-25 14:13:23 +0000 UTC" firstStartedPulling="2026-02-25 14:13:24.671178073 +0000 UTC m=+10518.711910400" lastFinishedPulling="2026-02-25 14:13:29.121193148 +0000 UTC m=+10523.161925475" observedRunningTime="2026-02-25 14:13:29.727429828 +0000 UTC m=+10523.768162175" watchObservedRunningTime="2026-02-25 14:13:29.731460315 +0000 UTC m=+10523.772192642" Feb 25 14:13:30 crc kubenswrapper[5005]: I0225 14:13:30.723846 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" 
event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerStarted","Data":"4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb"} Feb 25 14:13:31 crc kubenswrapper[5005]: I0225 14:13:31.731964 5005 generic.go:334] "Generic (PLEG): container finished" podID="532b125e-183d-473e-869e-5fe46f1c60c9" containerID="4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb" exitCode=0 Feb 25 14:13:31 crc kubenswrapper[5005]: I0225 14:13:31.732002 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerDied","Data":"4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb"} Feb 25 14:13:32 crc kubenswrapper[5005]: I0225 14:13:32.741148 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerStarted","Data":"0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee"} Feb 25 14:13:32 crc kubenswrapper[5005]: I0225 14:13:32.774983 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4rs7p" podStartSLOduration=3.359748137 podStartE2EDuration="6.774962344s" podCreationTimestamp="2026-02-25 14:13:26 +0000 UTC" firstStartedPulling="2026-02-25 14:13:28.70791222 +0000 UTC m=+10522.748644547" lastFinishedPulling="2026-02-25 14:13:32.123126417 +0000 UTC m=+10526.163858754" observedRunningTime="2026-02-25 14:13:32.767637112 +0000 UTC m=+10526.808369439" watchObservedRunningTime="2026-02-25 14:13:32.774962344 +0000 UTC m=+10526.815694671" Feb 25 14:13:33 crc kubenswrapper[5005]: I0225 14:13:33.715507 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:33 crc kubenswrapper[5005]: I0225 14:13:33.716435 5005 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:34 crc kubenswrapper[5005]: I0225 14:13:34.766868 5005 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ntgqh" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="registry-server" probeResult="failure" output=< Feb 25 14:13:34 crc kubenswrapper[5005]: timeout: failed to connect service ":50051" within 1s Feb 25 14:13:34 crc kubenswrapper[5005]: > Feb 25 14:13:36 crc kubenswrapper[5005]: I0225 14:13:36.934472 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:36 crc kubenswrapper[5005]: I0225 14:13:36.935127 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:36 crc kubenswrapper[5005]: I0225 14:13:36.979364 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:38 crc kubenswrapper[5005]: I0225 14:13:38.102975 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:38 crc kubenswrapper[5005]: I0225 14:13:38.151922 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rs7p"] Feb 25 14:13:39 crc kubenswrapper[5005]: I0225 14:13:39.821093 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4rs7p" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="registry-server" containerID="cri-o://0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee" gracePeriod=2 Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.369239 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.467721 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-catalog-content\") pod \"532b125e-183d-473e-869e-5fe46f1c60c9\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.467979 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-utilities\") pod \"532b125e-183d-473e-869e-5fe46f1c60c9\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.468018 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9shbw\" (UniqueName: \"kubernetes.io/projected/532b125e-183d-473e-869e-5fe46f1c60c9-kube-api-access-9shbw\") pod \"532b125e-183d-473e-869e-5fe46f1c60c9\" (UID: \"532b125e-183d-473e-869e-5fe46f1c60c9\") " Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.469341 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-utilities" (OuterVolumeSpecName: "utilities") pod "532b125e-183d-473e-869e-5fe46f1c60c9" (UID: "532b125e-183d-473e-869e-5fe46f1c60c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.474571 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532b125e-183d-473e-869e-5fe46f1c60c9-kube-api-access-9shbw" (OuterVolumeSpecName: "kube-api-access-9shbw") pod "532b125e-183d-473e-869e-5fe46f1c60c9" (UID: "532b125e-183d-473e-869e-5fe46f1c60c9"). InnerVolumeSpecName "kube-api-access-9shbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.520847 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "532b125e-183d-473e-869e-5fe46f1c60c9" (UID: "532b125e-183d-473e-869e-5fe46f1c60c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.570702 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.570738 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9shbw\" (UniqueName: \"kubernetes.io/projected/532b125e-183d-473e-869e-5fe46f1c60c9-kube-api-access-9shbw\") on node \"crc\" DevicePath \"\"" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.570750 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b125e-183d-473e-869e-5fe46f1c60c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.685559 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:13:40 crc kubenswrapper[5005]: E0225 14:13:40.686027 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:13:40 
crc kubenswrapper[5005]: I0225 14:13:40.830979 5005 generic.go:334] "Generic (PLEG): container finished" podID="532b125e-183d-473e-869e-5fe46f1c60c9" containerID="0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee" exitCode=0 Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.831043 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rs7p" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.831066 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerDied","Data":"0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee"} Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.831447 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rs7p" event={"ID":"532b125e-183d-473e-869e-5fe46f1c60c9","Type":"ContainerDied","Data":"5e6283d882145de5575bfdbd823137227b37a090834226ebf568d7ce7a3e7ce8"} Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.831469 5005 scope.go:117] "RemoveContainer" containerID="0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.848032 5005 scope.go:117] "RemoveContainer" containerID="4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.852408 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rs7p"] Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.860674 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4rs7p"] Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.869349 5005 scope.go:117] "RemoveContainer" containerID="2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456" Feb 25 14:13:40 crc 
kubenswrapper[5005]: I0225 14:13:40.916390 5005 scope.go:117] "RemoveContainer" containerID="0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee" Feb 25 14:13:40 crc kubenswrapper[5005]: E0225 14:13:40.917147 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee\": container with ID starting with 0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee not found: ID does not exist" containerID="0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.917197 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee"} err="failed to get container status \"0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee\": rpc error: code = NotFound desc = could not find container \"0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee\": container with ID starting with 0ff720f1be605632750822a7ce71ac72a75abdc2e6a3bafb9b8b74a9b41602ee not found: ID does not exist" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.917245 5005 scope.go:117] "RemoveContainer" containerID="4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb" Feb 25 14:13:40 crc kubenswrapper[5005]: E0225 14:13:40.917727 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb\": container with ID starting with 4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb not found: ID does not exist" containerID="4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.917761 5005 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb"} err="failed to get container status \"4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb\": rpc error: code = NotFound desc = could not find container \"4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb\": container with ID starting with 4d30d911def24297f4e379fd3f12066e1f56d62be03c252cad064bf869b78cbb not found: ID does not exist" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.917781 5005 scope.go:117] "RemoveContainer" containerID="2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456" Feb 25 14:13:40 crc kubenswrapper[5005]: E0225 14:13:40.918059 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456\": container with ID starting with 2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456 not found: ID does not exist" containerID="2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456" Feb 25 14:13:40 crc kubenswrapper[5005]: I0225 14:13:40.918084 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456"} err="failed to get container status \"2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456\": rpc error: code = NotFound desc = could not find container \"2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456\": container with ID starting with 2cecabdec7d5d487354bdcc9ee38312e1392e576457c29b3699d0aada9242456 not found: ID does not exist" Feb 25 14:13:42 crc kubenswrapper[5005]: I0225 14:13:42.695968 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" path="/var/lib/kubelet/pods/532b125e-183d-473e-869e-5fe46f1c60c9/volumes" Feb 25 14:13:43 crc kubenswrapper[5005]: I0225 
14:13:43.765583 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:43 crc kubenswrapper[5005]: I0225 14:13:43.820073 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:44 crc kubenswrapper[5005]: I0225 14:13:44.001840 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntgqh"] Feb 25 14:13:44 crc kubenswrapper[5005]: I0225 14:13:44.870089 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ntgqh" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="registry-server" containerID="cri-o://55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c" gracePeriod=2 Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.380854 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.479417 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-utilities\") pod \"a49397bd-3d28-460a-b537-8d4cd30d981d\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.479515 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-catalog-content\") pod \"a49397bd-3d28-460a-b537-8d4cd30d981d\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.479662 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b4xx\" (UniqueName: 
\"kubernetes.io/projected/a49397bd-3d28-460a-b537-8d4cd30d981d-kube-api-access-9b4xx\") pod \"a49397bd-3d28-460a-b537-8d4cd30d981d\" (UID: \"a49397bd-3d28-460a-b537-8d4cd30d981d\") " Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.481863 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-utilities" (OuterVolumeSpecName: "utilities") pod "a49397bd-3d28-460a-b537-8d4cd30d981d" (UID: "a49397bd-3d28-460a-b537-8d4cd30d981d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.485825 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49397bd-3d28-460a-b537-8d4cd30d981d-kube-api-access-9b4xx" (OuterVolumeSpecName: "kube-api-access-9b4xx") pod "a49397bd-3d28-460a-b537-8d4cd30d981d" (UID: "a49397bd-3d28-460a-b537-8d4cd30d981d"). InnerVolumeSpecName "kube-api-access-9b4xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.583086 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b4xx\" (UniqueName: \"kubernetes.io/projected/a49397bd-3d28-460a-b537-8d4cd30d981d-kube-api-access-9b4xx\") on node \"crc\" DevicePath \"\"" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.583127 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.620768 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a49397bd-3d28-460a-b537-8d4cd30d981d" (UID: "a49397bd-3d28-460a-b537-8d4cd30d981d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.685117 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49397bd-3d28-460a-b537-8d4cd30d981d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.879747 5005 generic.go:334] "Generic (PLEG): container finished" podID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerID="55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c" exitCode=0 Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.879785 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerDied","Data":"55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c"} Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.879835 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntgqh" event={"ID":"a49397bd-3d28-460a-b537-8d4cd30d981d","Type":"ContainerDied","Data":"9485ab00630fa0f0c608fa2a074d8982ecf3deab80417b1f691626b93dc34210"} Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.879854 5005 scope.go:117] "RemoveContainer" containerID="55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.879869 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntgqh" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.904569 5005 scope.go:117] "RemoveContainer" containerID="2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.915662 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntgqh"] Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.926162 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ntgqh"] Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.946955 5005 scope.go:117] "RemoveContainer" containerID="afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.972517 5005 scope.go:117] "RemoveContainer" containerID="55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c" Feb 25 14:13:45 crc kubenswrapper[5005]: E0225 14:13:45.972913 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c\": container with ID starting with 55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c not found: ID does not exist" containerID="55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.972955 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c"} err="failed to get container status \"55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c\": rpc error: code = NotFound desc = could not find container \"55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c\": container with ID starting with 55485e445e5b86c1e4feb8fba34138567985d2dc65e362e4d4e2da9dbaef1b3c not found: ID does 
not exist" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.972984 5005 scope.go:117] "RemoveContainer" containerID="2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04" Feb 25 14:13:45 crc kubenswrapper[5005]: E0225 14:13:45.973345 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04\": container with ID starting with 2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04 not found: ID does not exist" containerID="2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.973412 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04"} err="failed to get container status \"2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04\": rpc error: code = NotFound desc = could not find container \"2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04\": container with ID starting with 2afe4ccde0bfefe4340c451e62724121ea1eb1b291b9a248532f61353bd8ad04 not found: ID does not exist" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.973439 5005 scope.go:117] "RemoveContainer" containerID="afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d" Feb 25 14:13:45 crc kubenswrapper[5005]: E0225 14:13:45.973740 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d\": container with ID starting with afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d not found: ID does not exist" containerID="afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d" Feb 25 14:13:45 crc kubenswrapper[5005]: I0225 14:13:45.973786 5005 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d"} err="failed to get container status \"afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d\": rpc error: code = NotFound desc = could not find container \"afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d\": container with ID starting with afa7834ab21ce0151e4e4165a405e166ac2a71145c6c415f0b0926cebf84628d not found: ID does not exist" Feb 25 14:13:46 crc kubenswrapper[5005]: I0225 14:13:46.696436 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" path="/var/lib/kubelet/pods/a49397bd-3d28-460a-b537-8d4cd30d981d/volumes" Feb 25 14:13:54 crc kubenswrapper[5005]: I0225 14:13:54.686472 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:13:54 crc kubenswrapper[5005]: E0225 14:13:54.687268 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.142882 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533814-6xlqr"] Feb 25 14:14:00 crc kubenswrapper[5005]: E0225 14:14:00.144077 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="registry-server" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144093 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="registry-server" Feb 25 
14:14:00 crc kubenswrapper[5005]: E0225 14:14:00.144119 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="extract-content" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144126 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="extract-content" Feb 25 14:14:00 crc kubenswrapper[5005]: E0225 14:14:00.144141 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="extract-content" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144150 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="extract-content" Feb 25 14:14:00 crc kubenswrapper[5005]: E0225 14:14:00.144178 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="extract-utilities" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144186 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="extract-utilities" Feb 25 14:14:00 crc kubenswrapper[5005]: E0225 14:14:00.144211 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="extract-utilities" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144219 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="extract-utilities" Feb 25 14:14:00 crc kubenswrapper[5005]: E0225 14:14:00.144229 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="registry-server" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144237 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="registry-server" Feb 25 
14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144453 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="532b125e-183d-473e-869e-5fe46f1c60c9" containerName="registry-server" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.144473 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49397bd-3d28-460a-b537-8d4cd30d981d" containerName="registry-server" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.145330 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.147852 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.148223 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.150510 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.158678 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533814-6xlqr"] Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.287749 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54lk4\" (UniqueName: \"kubernetes.io/projected/c5ce3d82-823f-4884-bf5c-1523ee618ed9-kube-api-access-54lk4\") pod \"auto-csr-approver-29533814-6xlqr\" (UID: \"c5ce3d82-823f-4884-bf5c-1523ee618ed9\") " pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.389909 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54lk4\" (UniqueName: 
\"kubernetes.io/projected/c5ce3d82-823f-4884-bf5c-1523ee618ed9-kube-api-access-54lk4\") pod \"auto-csr-approver-29533814-6xlqr\" (UID: \"c5ce3d82-823f-4884-bf5c-1523ee618ed9\") " pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.412137 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54lk4\" (UniqueName: \"kubernetes.io/projected/c5ce3d82-823f-4884-bf5c-1523ee618ed9-kube-api-access-54lk4\") pod \"auto-csr-approver-29533814-6xlqr\" (UID: \"c5ce3d82-823f-4884-bf5c-1523ee618ed9\") " pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.468959 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:00 crc kubenswrapper[5005]: I0225 14:14:00.974056 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533814-6xlqr"] Feb 25 14:14:01 crc kubenswrapper[5005]: I0225 14:14:01.015818 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" event={"ID":"c5ce3d82-823f-4884-bf5c-1523ee618ed9","Type":"ContainerStarted","Data":"9ea48357b1d3d89a6ee42832285df7de69b842a09d6554799facb0df33a894f7"} Feb 25 14:14:03 crc kubenswrapper[5005]: I0225 14:14:03.034050 5005 generic.go:334] "Generic (PLEG): container finished" podID="c5ce3d82-823f-4884-bf5c-1523ee618ed9" containerID="3f21f5d981fa1f6719966b81b2798fa53a57440d55c2394f61e74ef9efe092ea" exitCode=0 Feb 25 14:14:03 crc kubenswrapper[5005]: I0225 14:14:03.034147 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" event={"ID":"c5ce3d82-823f-4884-bf5c-1523ee618ed9","Type":"ContainerDied","Data":"3f21f5d981fa1f6719966b81b2798fa53a57440d55c2394f61e74ef9efe092ea"} Feb 25 14:14:04 crc kubenswrapper[5005]: I0225 14:14:04.406938 5005 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:04 crc kubenswrapper[5005]: I0225 14:14:04.569803 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54lk4\" (UniqueName: \"kubernetes.io/projected/c5ce3d82-823f-4884-bf5c-1523ee618ed9-kube-api-access-54lk4\") pod \"c5ce3d82-823f-4884-bf5c-1523ee618ed9\" (UID: \"c5ce3d82-823f-4884-bf5c-1523ee618ed9\") " Feb 25 14:14:04 crc kubenswrapper[5005]: I0225 14:14:04.575523 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ce3d82-823f-4884-bf5c-1523ee618ed9-kube-api-access-54lk4" (OuterVolumeSpecName: "kube-api-access-54lk4") pod "c5ce3d82-823f-4884-bf5c-1523ee618ed9" (UID: "c5ce3d82-823f-4884-bf5c-1523ee618ed9"). InnerVolumeSpecName "kube-api-access-54lk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:14:04 crc kubenswrapper[5005]: I0225 14:14:04.671539 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54lk4\" (UniqueName: \"kubernetes.io/projected/c5ce3d82-823f-4884-bf5c-1523ee618ed9-kube-api-access-54lk4\") on node \"crc\" DevicePath \"\"" Feb 25 14:14:05 crc kubenswrapper[5005]: I0225 14:14:05.054430 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" event={"ID":"c5ce3d82-823f-4884-bf5c-1523ee618ed9","Type":"ContainerDied","Data":"9ea48357b1d3d89a6ee42832285df7de69b842a09d6554799facb0df33a894f7"} Feb 25 14:14:05 crc kubenswrapper[5005]: I0225 14:14:05.054747 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea48357b1d3d89a6ee42832285df7de69b842a09d6554799facb0df33a894f7" Feb 25 14:14:05 crc kubenswrapper[5005]: I0225 14:14:05.054492 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533814-6xlqr" Feb 25 14:14:05 crc kubenswrapper[5005]: I0225 14:14:05.473473 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533808-cfnps"] Feb 25 14:14:05 crc kubenswrapper[5005]: I0225 14:14:05.484821 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533808-cfnps"] Feb 25 14:14:06 crc kubenswrapper[5005]: I0225 14:14:06.700355 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6112fc0-5d70-4d6b-9b4d-7117488fafb1" path="/var/lib/kubelet/pods/d6112fc0-5d70-4d6b-9b4d-7117488fafb1/volumes" Feb 25 14:14:08 crc kubenswrapper[5005]: I0225 14:14:08.685733 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:14:08 crc kubenswrapper[5005]: E0225 14:14:08.686260 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:14:21 crc kubenswrapper[5005]: I0225 14:14:21.685561 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:14:21 crc kubenswrapper[5005]: E0225 14:14:21.686289 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:14:28 crc kubenswrapper[5005]: I0225 14:14:28.426545 5005 scope.go:117] "RemoveContainer" containerID="9f21062f687d565ea7ba2566d46c9ece4a3276dab27b12f3a5828a2d0a21e20d" Feb 25 14:14:32 crc kubenswrapper[5005]: I0225 14:14:32.686195 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:14:32 crc kubenswrapper[5005]: E0225 14:14:32.686994 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:14:44 crc kubenswrapper[5005]: I0225 14:14:44.686100 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:14:44 crc kubenswrapper[5005]: E0225 14:14:44.686795 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:14:55 crc kubenswrapper[5005]: I0225 14:14:55.686026 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:14:55 crc kubenswrapper[5005]: E0225 14:14:55.686901 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.009529 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpkd5/must-gather-cvghf"] Feb 25 14:14:59 crc kubenswrapper[5005]: E0225 14:14:59.010295 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ce3d82-823f-4884-bf5c-1523ee618ed9" containerName="oc" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.010315 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ce3d82-823f-4884-bf5c-1523ee618ed9" containerName="oc" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.010587 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ce3d82-823f-4884-bf5c-1523ee618ed9" containerName="oc" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.011825 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.013823 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lpkd5"/"openshift-service-ca.crt" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.014234 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lpkd5"/"default-dockercfg-x4msv" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.014403 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lpkd5"/"kube-root-ca.crt" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.027831 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lpkd5/must-gather-cvghf"] Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.165084 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hpfx\" (UniqueName: \"kubernetes.io/projected/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-kube-api-access-2hpfx\") pod \"must-gather-cvghf\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.165182 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-must-gather-output\") pod \"must-gather-cvghf\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.266688 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hpfx\" (UniqueName: \"kubernetes.io/projected/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-kube-api-access-2hpfx\") pod \"must-gather-cvghf\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " 
pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.266791 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-must-gather-output\") pod \"must-gather-cvghf\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.267234 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-must-gather-output\") pod \"must-gather-cvghf\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.293247 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hpfx\" (UniqueName: \"kubernetes.io/projected/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-kube-api-access-2hpfx\") pod \"must-gather-cvghf\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.332969 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:14:59 crc kubenswrapper[5005]: I0225 14:14:59.791809 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lpkd5/must-gather-cvghf"] Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.150351 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq"] Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.152750 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.159017 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.159639 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.168042 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq"] Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.185359 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96dn\" (UniqueName: \"kubernetes.io/projected/0eca32f3-a760-44bf-b0f2-516fdf241d52-kube-api-access-m96dn\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.185510 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eca32f3-a760-44bf-b0f2-516fdf241d52-config-volume\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.185575 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eca32f3-a760-44bf-b0f2-516fdf241d52-secret-volume\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.288212 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eca32f3-a760-44bf-b0f2-516fdf241d52-config-volume\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.288310 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eca32f3-a760-44bf-b0f2-516fdf241d52-secret-volume\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.288439 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96dn\" (UniqueName: \"kubernetes.io/projected/0eca32f3-a760-44bf-b0f2-516fdf241d52-kube-api-access-m96dn\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.291524 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eca32f3-a760-44bf-b0f2-516fdf241d52-config-volume\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.300144 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0eca32f3-a760-44bf-b0f2-516fdf241d52-secret-volume\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.310926 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96dn\" (UniqueName: \"kubernetes.io/projected/0eca32f3-a760-44bf-b0f2-516fdf241d52-kube-api-access-m96dn\") pod \"collect-profiles-29533815-z6tcq\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.483786 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.521671 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/must-gather-cvghf" event={"ID":"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316","Type":"ContainerStarted","Data":"e82fbe1a851a465c50d95405017c6ec78f2da6d1e8f77f87ef7ce9a4fe5996e8"} Feb 25 14:15:00 crc kubenswrapper[5005]: I0225 14:15:00.978079 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq"] Feb 25 14:15:00 crc kubenswrapper[5005]: W0225 14:15:00.985873 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0eca32f3_a760_44bf_b0f2_516fdf241d52.slice/crio-39b03db52631f5c13722bc4c5958ea0a5e68d2a6824d0ebf410a10d3f428e211 WatchSource:0}: Error finding container 39b03db52631f5c13722bc4c5958ea0a5e68d2a6824d0ebf410a10d3f428e211: Status 404 returned error can't find the container with id 39b03db52631f5c13722bc4c5958ea0a5e68d2a6824d0ebf410a10d3f428e211 Feb 25 14:15:01 crc kubenswrapper[5005]: I0225 
14:15:01.536111 5005 generic.go:334] "Generic (PLEG): container finished" podID="0eca32f3-a760-44bf-b0f2-516fdf241d52" containerID="c53bed76b7d76122d5a6c6666a3078a40e347d72f74b457cdd31c7f431f01325" exitCode=0 Feb 25 14:15:01 crc kubenswrapper[5005]: I0225 14:15:01.536155 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" event={"ID":"0eca32f3-a760-44bf-b0f2-516fdf241d52","Type":"ContainerDied","Data":"c53bed76b7d76122d5a6c6666a3078a40e347d72f74b457cdd31c7f431f01325"} Feb 25 14:15:01 crc kubenswrapper[5005]: I0225 14:15:01.536182 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" event={"ID":"0eca32f3-a760-44bf-b0f2-516fdf241d52","Type":"ContainerStarted","Data":"39b03db52631f5c13722bc4c5958ea0a5e68d2a6824d0ebf410a10d3f428e211"} Feb 25 14:15:02 crc kubenswrapper[5005]: E0225 14:15:02.714181 5005 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Feb 25 14:15:04 crc kubenswrapper[5005]: I0225 14:15:04.916524 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:04 crc kubenswrapper[5005]: I0225 14:15:04.987776 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eca32f3-a760-44bf-b0f2-516fdf241d52-config-volume\") pod \"0eca32f3-a760-44bf-b0f2-516fdf241d52\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " Feb 25 14:15:04 crc kubenswrapper[5005]: I0225 14:15:04.987942 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eca32f3-a760-44bf-b0f2-516fdf241d52-secret-volume\") pod \"0eca32f3-a760-44bf-b0f2-516fdf241d52\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " Feb 25 14:15:04 crc kubenswrapper[5005]: I0225 14:15:04.989028 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eca32f3-a760-44bf-b0f2-516fdf241d52-config-volume" (OuterVolumeSpecName: "config-volume") pod "0eca32f3-a760-44bf-b0f2-516fdf241d52" (UID: "0eca32f3-a760-44bf-b0f2-516fdf241d52"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.002605 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eca32f3-a760-44bf-b0f2-516fdf241d52-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0eca32f3-a760-44bf-b0f2-516fdf241d52" (UID: "0eca32f3-a760-44bf-b0f2-516fdf241d52"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.089571 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96dn\" (UniqueName: \"kubernetes.io/projected/0eca32f3-a760-44bf-b0f2-516fdf241d52-kube-api-access-m96dn\") pod \"0eca32f3-a760-44bf-b0f2-516fdf241d52\" (UID: \"0eca32f3-a760-44bf-b0f2-516fdf241d52\") " Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.091213 5005 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eca32f3-a760-44bf-b0f2-516fdf241d52-config-volume\") on node \"crc\" DevicePath \"\"" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.091236 5005 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eca32f3-a760-44bf-b0f2-516fdf241d52-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.093433 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eca32f3-a760-44bf-b0f2-516fdf241d52-kube-api-access-m96dn" (OuterVolumeSpecName: "kube-api-access-m96dn") pod "0eca32f3-a760-44bf-b0f2-516fdf241d52" (UID: "0eca32f3-a760-44bf-b0f2-516fdf241d52"). InnerVolumeSpecName "kube-api-access-m96dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.193016 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96dn\" (UniqueName: \"kubernetes.io/projected/0eca32f3-a760-44bf-b0f2-516fdf241d52-kube-api-access-m96dn\") on node \"crc\" DevicePath \"\"" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.584415 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" event={"ID":"0eca32f3-a760-44bf-b0f2-516fdf241d52","Type":"ContainerDied","Data":"39b03db52631f5c13722bc4c5958ea0a5e68d2a6824d0ebf410a10d3f428e211"} Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.584464 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b03db52631f5c13722bc4c5958ea0a5e68d2a6824d0ebf410a10d3f428e211" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.584522 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29533815-z6tcq" Feb 25 14:15:05 crc kubenswrapper[5005]: I0225 14:15:05.984583 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth"] Feb 25 14:15:06 crc kubenswrapper[5005]: I0225 14:15:06.010017 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29533770-9jtth"] Feb 25 14:15:06 crc kubenswrapper[5005]: I0225 14:15:06.711154 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb76377-039f-40e0-8934-b8d0c7bc7bda" path="/var/lib/kubelet/pods/1cb76377-039f-40e0-8934-b8d0c7bc7bda/volumes" Feb 25 14:15:07 crc kubenswrapper[5005]: I0225 14:15:07.602244 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/must-gather-cvghf" event={"ID":"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316","Type":"ContainerStarted","Data":"4842fd77e2a6d8c97d35b8045f19958de3d7730481c50628cf80c7ce4a77428e"} Feb 25 14:15:07 crc kubenswrapper[5005]: I0225 14:15:07.602594 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/must-gather-cvghf" event={"ID":"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316","Type":"ContainerStarted","Data":"3d2c17fb28c07f3be7a3fb592a51ac79c0647183a7c74ee3568fea267d3b05cd"} Feb 25 14:15:07 crc kubenswrapper[5005]: I0225 14:15:07.618102 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpkd5/must-gather-cvghf" podStartSLOduration=2.341899921 podStartE2EDuration="9.618085621s" podCreationTimestamp="2026-02-25 14:14:58 +0000 UTC" firstStartedPulling="2026-02-25 14:14:59.78677873 +0000 UTC m=+10613.827511077" lastFinishedPulling="2026-02-25 14:15:07.06296445 +0000 UTC m=+10621.103696777" observedRunningTime="2026-02-25 14:15:07.615952454 +0000 UTC m=+10621.656684791" watchObservedRunningTime="2026-02-25 14:15:07.618085621 +0000 UTC 
m=+10621.658817948" Feb 25 14:15:08 crc kubenswrapper[5005]: I0225 14:15:08.687419 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:15:08 crc kubenswrapper[5005]: E0225 14:15:08.688584 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.280687 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-kmdtz"] Feb 25 14:15:13 crc kubenswrapper[5005]: E0225 14:15:13.281642 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eca32f3-a760-44bf-b0f2-516fdf241d52" containerName="collect-profiles" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.281655 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eca32f3-a760-44bf-b0f2-516fdf241d52" containerName="collect-profiles" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.281826 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eca32f3-a760-44bf-b0f2-516fdf241d52" containerName="collect-profiles" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.282416 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.453837 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f03b436-c52e-457b-95ba-c5e81a57e295-host\") pod \"crc-debug-kmdtz\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.454510 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4zc6\" (UniqueName: \"kubernetes.io/projected/4f03b436-c52e-457b-95ba-c5e81a57e295-kube-api-access-k4zc6\") pod \"crc-debug-kmdtz\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.556699 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4zc6\" (UniqueName: \"kubernetes.io/projected/4f03b436-c52e-457b-95ba-c5e81a57e295-kube-api-access-k4zc6\") pod \"crc-debug-kmdtz\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.556772 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f03b436-c52e-457b-95ba-c5e81a57e295-host\") pod \"crc-debug-kmdtz\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.556892 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f03b436-c52e-457b-95ba-c5e81a57e295-host\") pod \"crc-debug-kmdtz\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc 
kubenswrapper[5005]: I0225 14:15:13.577736 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4zc6\" (UniqueName: \"kubernetes.io/projected/4f03b436-c52e-457b-95ba-c5e81a57e295-kube-api-access-k4zc6\") pod \"crc-debug-kmdtz\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.601760 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:15:13 crc kubenswrapper[5005]: I0225 14:15:13.680073 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" event={"ID":"4f03b436-c52e-457b-95ba-c5e81a57e295","Type":"ContainerStarted","Data":"fc37a207f8b3a0510286583ea08c287c3e549710e95eac7270f2c52b2785d3f7"} Feb 25 14:15:20 crc kubenswrapper[5005]: I0225 14:15:20.686116 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:15:20 crc kubenswrapper[5005]: E0225 14:15:20.686914 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:15:25 crc kubenswrapper[5005]: I0225 14:15:25.795181 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" event={"ID":"4f03b436-c52e-457b-95ba-c5e81a57e295","Type":"ContainerStarted","Data":"c7a08e4c0246aa5ee5610205c735101d25a4db10f6214069bc3f7d5d0d776a5c"} Feb 25 14:15:25 crc kubenswrapper[5005]: I0225 14:15:25.820841 5005 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" podStartSLOduration=1.949535018 podStartE2EDuration="12.820824992s" podCreationTimestamp="2026-02-25 14:15:13 +0000 UTC" firstStartedPulling="2026-02-25 14:15:13.64993924 +0000 UTC m=+10627.690671567" lastFinishedPulling="2026-02-25 14:15:24.521229214 +0000 UTC m=+10638.561961541" observedRunningTime="2026-02-25 14:15:25.814584246 +0000 UTC m=+10639.855316583" watchObservedRunningTime="2026-02-25 14:15:25.820824992 +0000 UTC m=+10639.861557319" Feb 25 14:15:28 crc kubenswrapper[5005]: I0225 14:15:28.547060 5005 scope.go:117] "RemoveContainer" containerID="5f7adb34e6c338d9739b660f5e91c844c71f456f0881e2b4f3c0d410e2522f6f" Feb 25 14:15:33 crc kubenswrapper[5005]: I0225 14:15:33.686093 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:15:33 crc kubenswrapper[5005]: E0225 14:15:33.686857 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:15:45 crc kubenswrapper[5005]: I0225 14:15:45.686216 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:15:45 crc kubenswrapper[5005]: E0225 14:15:45.687014 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.145798 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533816-9rgsp"] Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.148592 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.154574 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.154979 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.155105 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.159619 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533816-9rgsp"] Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.259719 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwjhx\" (UniqueName: \"kubernetes.io/projected/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70-kube-api-access-rwjhx\") pod \"auto-csr-approver-29533816-9rgsp\" (UID: \"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70\") " pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.362160 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwjhx\" (UniqueName: \"kubernetes.io/projected/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70-kube-api-access-rwjhx\") pod \"auto-csr-approver-29533816-9rgsp\" (UID: \"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70\") " pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:00 crc 
kubenswrapper[5005]: I0225 14:16:00.382695 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwjhx\" (UniqueName: \"kubernetes.io/projected/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70-kube-api-access-rwjhx\") pod \"auto-csr-approver-29533816-9rgsp\" (UID: \"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70\") " pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.477550 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.689328 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:16:00 crc kubenswrapper[5005]: E0225 14:16:00.689876 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:16:00 crc kubenswrapper[5005]: I0225 14:16:00.940952 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533816-9rgsp"] Feb 25 14:16:01 crc kubenswrapper[5005]: I0225 14:16:01.108091 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" event={"ID":"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70","Type":"ContainerStarted","Data":"58b8d8d021f5e9a25bf2e4cecd5e251574cd7f16085912dfefdc4659350567d6"} Feb 25 14:16:03 crc kubenswrapper[5005]: I0225 14:16:03.125776 5005 generic.go:334] "Generic (PLEG): container finished" podID="1bfdb45e-91ba-4ac4-914f-ac8175b5cd70" containerID="d9e6cc6614631429870024c3a2a08f3afa68b4d23b150dd1a39cbd0081f4b8ba" 
exitCode=0 Feb 25 14:16:03 crc kubenswrapper[5005]: I0225 14:16:03.125888 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" event={"ID":"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70","Type":"ContainerDied","Data":"d9e6cc6614631429870024c3a2a08f3afa68b4d23b150dd1a39cbd0081f4b8ba"} Feb 25 14:16:04 crc kubenswrapper[5005]: I0225 14:16:04.664306 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:04 crc kubenswrapper[5005]: I0225 14:16:04.858125 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwjhx\" (UniqueName: \"kubernetes.io/projected/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70-kube-api-access-rwjhx\") pod \"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70\" (UID: \"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70\") " Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.146444 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" event={"ID":"1bfdb45e-91ba-4ac4-914f-ac8175b5cd70","Type":"ContainerDied","Data":"58b8d8d021f5e9a25bf2e4cecd5e251574cd7f16085912dfefdc4659350567d6"} Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.146895 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b8d8d021f5e9a25bf2e4cecd5e251574cd7f16085912dfefdc4659350567d6" Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.146983 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533816-9rgsp" Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.459144 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70-kube-api-access-rwjhx" (OuterVolumeSpecName: "kube-api-access-rwjhx") pod "1bfdb45e-91ba-4ac4-914f-ac8175b5cd70" (UID: "1bfdb45e-91ba-4ac4-914f-ac8175b5cd70"). InnerVolumeSpecName "kube-api-access-rwjhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.468621 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwjhx\" (UniqueName: \"kubernetes.io/projected/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70-kube-api-access-rwjhx\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.744971 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533810-9lfw8"] Feb 25 14:16:05 crc kubenswrapper[5005]: I0225 14:16:05.752735 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533810-9lfw8"] Feb 25 14:16:06 crc kubenswrapper[5005]: I0225 14:16:06.703591 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab364140-1ee9-48f2-b494-8118720b89e7" path="/var/lib/kubelet/pods/ab364140-1ee9-48f2-b494-8118720b89e7/volumes" Feb 25 14:16:11 crc kubenswrapper[5005]: I0225 14:16:11.200775 5005 generic.go:334] "Generic (PLEG): container finished" podID="4f03b436-c52e-457b-95ba-c5e81a57e295" containerID="c7a08e4c0246aa5ee5610205c735101d25a4db10f6214069bc3f7d5d0d776a5c" exitCode=0 Feb 25 14:16:11 crc kubenswrapper[5005]: I0225 14:16:11.200869 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" event={"ID":"4f03b436-c52e-457b-95ba-c5e81a57e295","Type":"ContainerDied","Data":"c7a08e4c0246aa5ee5610205c735101d25a4db10f6214069bc3f7d5d0d776a5c"} Feb 
25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.313657 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.324298 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f03b436-c52e-457b-95ba-c5e81a57e295-host\") pod \"4f03b436-c52e-457b-95ba-c5e81a57e295\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.324378 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4zc6\" (UniqueName: \"kubernetes.io/projected/4f03b436-c52e-457b-95ba-c5e81a57e295-kube-api-access-k4zc6\") pod \"4f03b436-c52e-457b-95ba-c5e81a57e295\" (UID: \"4f03b436-c52e-457b-95ba-c5e81a57e295\") " Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.324405 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f03b436-c52e-457b-95ba-c5e81a57e295-host" (OuterVolumeSpecName: "host") pod "4f03b436-c52e-457b-95ba-c5e81a57e295" (UID: "4f03b436-c52e-457b-95ba-c5e81a57e295"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.324905 5005 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f03b436-c52e-457b-95ba-c5e81a57e295-host\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.330851 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f03b436-c52e-457b-95ba-c5e81a57e295-kube-api-access-k4zc6" (OuterVolumeSpecName: "kube-api-access-k4zc6") pod "4f03b436-c52e-457b-95ba-c5e81a57e295" (UID: "4f03b436-c52e-457b-95ba-c5e81a57e295"). InnerVolumeSpecName "kube-api-access-k4zc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.347416 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-kmdtz"] Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.356304 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-kmdtz"] Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.426384 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4zc6\" (UniqueName: \"kubernetes.io/projected/4f03b436-c52e-457b-95ba-c5e81a57e295-kube-api-access-k4zc6\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:12 crc kubenswrapper[5005]: I0225 14:16:12.699628 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f03b436-c52e-457b-95ba-c5e81a57e295" path="/var/lib/kubelet/pods/4f03b436-c52e-457b-95ba-c5e81a57e295/volumes" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.219971 5005 scope.go:117] "RemoveContainer" containerID="c7a08e4c0246aa5ee5610205c735101d25a4db10f6214069bc3f7d5d0d776a5c" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.220056 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-kmdtz" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.676965 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-mwk8r"] Feb 25 14:16:13 crc kubenswrapper[5005]: E0225 14:16:13.678467 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfdb45e-91ba-4ac4-914f-ac8175b5cd70" containerName="oc" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.678532 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfdb45e-91ba-4ac4-914f-ac8175b5cd70" containerName="oc" Feb 25 14:16:13 crc kubenswrapper[5005]: E0225 14:16:13.678608 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f03b436-c52e-457b-95ba-c5e81a57e295" containerName="container-00" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.678638 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f03b436-c52e-457b-95ba-c5e81a57e295" containerName="container-00" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.679236 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f03b436-c52e-457b-95ba-c5e81a57e295" containerName="container-00" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.679269 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfdb45e-91ba-4ac4-914f-ac8175b5cd70" containerName="oc" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.680848 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.762608 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-host\") pod \"crc-debug-mwk8r\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.762648 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8pbz\" (UniqueName: \"kubernetes.io/projected/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-kube-api-access-w8pbz\") pod \"crc-debug-mwk8r\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.864150 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-host\") pod \"crc-debug-mwk8r\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.864499 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8pbz\" (UniqueName: \"kubernetes.io/projected/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-kube-api-access-w8pbz\") pod \"crc-debug-mwk8r\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:13 crc kubenswrapper[5005]: I0225 14:16:13.864348 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-host\") pod \"crc-debug-mwk8r\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:13 crc 
kubenswrapper[5005]: I0225 14:16:13.904299 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8pbz\" (UniqueName: \"kubernetes.io/projected/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-kube-api-access-w8pbz\") pod \"crc-debug-mwk8r\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:14 crc kubenswrapper[5005]: I0225 14:16:14.006818 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:14 crc kubenswrapper[5005]: I0225 14:16:14.230605 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" event={"ID":"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2","Type":"ContainerStarted","Data":"a36513aaafb06e645aa3d174701ce16e9664a214b8ed182f5bb182d7b8dd0193"} Feb 25 14:16:14 crc kubenswrapper[5005]: I0225 14:16:14.685642 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:16:14 crc kubenswrapper[5005]: E0225 14:16:14.686155 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:16:15 crc kubenswrapper[5005]: I0225 14:16:15.253000 5005 generic.go:334] "Generic (PLEG): container finished" podID="148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" containerID="2df263c24cb3d562ab52ad387780e766a24fde5eb9ea726c135a2f48a7277045" exitCode=0 Feb 25 14:16:15 crc kubenswrapper[5005]: I0225 14:16:15.253060 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" 
event={"ID":"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2","Type":"ContainerDied","Data":"2df263c24cb3d562ab52ad387780e766a24fde5eb9ea726c135a2f48a7277045"} Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.379750 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.513992 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8pbz\" (UniqueName: \"kubernetes.io/projected/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-kube-api-access-w8pbz\") pod \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.514160 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-host\") pod \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\" (UID: \"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2\") " Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.514309 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-host" (OuterVolumeSpecName: "host") pod "148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" (UID: "148bd43e-5dfb-45eb-b33a-64b6c3a73ee2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.514801 5005 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-host\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.519843 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-kube-api-access-w8pbz" (OuterVolumeSpecName: "kube-api-access-w8pbz") pod "148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" (UID: "148bd43e-5dfb-45eb-b33a-64b6c3a73ee2"). InnerVolumeSpecName "kube-api-access-w8pbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:16:16 crc kubenswrapper[5005]: I0225 14:16:16.616189 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8pbz\" (UniqueName: \"kubernetes.io/projected/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2-kube-api-access-w8pbz\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:17 crc kubenswrapper[5005]: I0225 14:16:17.270035 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" event={"ID":"148bd43e-5dfb-45eb-b33a-64b6c3a73ee2","Type":"ContainerDied","Data":"a36513aaafb06e645aa3d174701ce16e9664a214b8ed182f5bb182d7b8dd0193"} Feb 25 14:16:17 crc kubenswrapper[5005]: I0225 14:16:17.270074 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a36513aaafb06e645aa3d174701ce16e9664a214b8ed182f5bb182d7b8dd0193" Feb 25 14:16:17 crc kubenswrapper[5005]: I0225 14:16:17.270591 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-mwk8r" Feb 25 14:16:17 crc kubenswrapper[5005]: I0225 14:16:17.935284 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-mwk8r"] Feb 25 14:16:17 crc kubenswrapper[5005]: I0225 14:16:17.948802 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-mwk8r"] Feb 25 14:16:18 crc kubenswrapper[5005]: I0225 14:16:18.698777 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" path="/var/lib/kubelet/pods/148bd43e-5dfb-45eb-b33a-64b6c3a73ee2/volumes" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.094705 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-9g84t"] Feb 25 14:16:19 crc kubenswrapper[5005]: E0225 14:16:19.095957 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" containerName="container-00" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.096027 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" containerName="container-00" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.096248 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="148bd43e-5dfb-45eb-b33a-64b6c3a73ee2" containerName="container-00" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.096933 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.274560 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f7142bc-4a6d-406a-937b-7a2eb008a44e-host\") pod \"crc-debug-9g84t\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.274860 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwx9\" (UniqueName: \"kubernetes.io/projected/9f7142bc-4a6d-406a-937b-7a2eb008a44e-kube-api-access-qtwx9\") pod \"crc-debug-9g84t\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.376537 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f7142bc-4a6d-406a-937b-7a2eb008a44e-host\") pod \"crc-debug-9g84t\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.376609 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwx9\" (UniqueName: \"kubernetes.io/projected/9f7142bc-4a6d-406a-937b-7a2eb008a44e-kube-api-access-qtwx9\") pod \"crc-debug-9g84t\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.376852 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f7142bc-4a6d-406a-937b-7a2eb008a44e-host\") pod \"crc-debug-9g84t\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc 
kubenswrapper[5005]: I0225 14:16:19.397204 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtwx9\" (UniqueName: \"kubernetes.io/projected/9f7142bc-4a6d-406a-937b-7a2eb008a44e-kube-api-access-qtwx9\") pod \"crc-debug-9g84t\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: I0225 14:16:19.417453 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:19 crc kubenswrapper[5005]: W0225 14:16:19.446118 5005 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7142bc_4a6d_406a_937b_7a2eb008a44e.slice/crio-43433f557e57f4a482bd383145af9d94f3b6bb8e765c25c51303b0d2406fa185 WatchSource:0}: Error finding container 43433f557e57f4a482bd383145af9d94f3b6bb8e765c25c51303b0d2406fa185: Status 404 returned error can't find the container with id 43433f557e57f4a482bd383145af9d94f3b6bb8e765c25c51303b0d2406fa185 Feb 25 14:16:20 crc kubenswrapper[5005]: I0225 14:16:20.300108 5005 generic.go:334] "Generic (PLEG): container finished" podID="9f7142bc-4a6d-406a-937b-7a2eb008a44e" containerID="d7084173b5ef0465e6b4770a909a9a9ddb67fa1ce50cc00ee17a7c94f12612ef" exitCode=0 Feb 25 14:16:20 crc kubenswrapper[5005]: I0225 14:16:20.300151 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-9g84t" event={"ID":"9f7142bc-4a6d-406a-937b-7a2eb008a44e","Type":"ContainerDied","Data":"d7084173b5ef0465e6b4770a909a9a9ddb67fa1ce50cc00ee17a7c94f12612ef"} Feb 25 14:16:20 crc kubenswrapper[5005]: I0225 14:16:20.300176 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/crc-debug-9g84t" event={"ID":"9f7142bc-4a6d-406a-937b-7a2eb008a44e","Type":"ContainerStarted","Data":"43433f557e57f4a482bd383145af9d94f3b6bb8e765c25c51303b0d2406fa185"} Feb 25 
14:16:20 crc kubenswrapper[5005]: I0225 14:16:20.346813 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-9g84t"] Feb 25 14:16:20 crc kubenswrapper[5005]: I0225 14:16:20.356174 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpkd5/crc-debug-9g84t"] Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.401287 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.515453 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f7142bc-4a6d-406a-937b-7a2eb008a44e-host\") pod \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.515501 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtwx9\" (UniqueName: \"kubernetes.io/projected/9f7142bc-4a6d-406a-937b-7a2eb008a44e-kube-api-access-qtwx9\") pod \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\" (UID: \"9f7142bc-4a6d-406a-937b-7a2eb008a44e\") " Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.515609 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f7142bc-4a6d-406a-937b-7a2eb008a44e-host" (OuterVolumeSpecName: "host") pod "9f7142bc-4a6d-406a-937b-7a2eb008a44e" (UID: "9f7142bc-4a6d-406a-937b-7a2eb008a44e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.515983 5005 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f7142bc-4a6d-406a-937b-7a2eb008a44e-host\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.521549 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7142bc-4a6d-406a-937b-7a2eb008a44e-kube-api-access-qtwx9" (OuterVolumeSpecName: "kube-api-access-qtwx9") pod "9f7142bc-4a6d-406a-937b-7a2eb008a44e" (UID: "9f7142bc-4a6d-406a-937b-7a2eb008a44e"). InnerVolumeSpecName "kube-api-access-qtwx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:16:21 crc kubenswrapper[5005]: I0225 14:16:21.618821 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtwx9\" (UniqueName: \"kubernetes.io/projected/9f7142bc-4a6d-406a-937b-7a2eb008a44e-kube-api-access-qtwx9\") on node \"crc\" DevicePath \"\"" Feb 25 14:16:22 crc kubenswrapper[5005]: I0225 14:16:22.334906 5005 scope.go:117] "RemoveContainer" containerID="d7084173b5ef0465e6b4770a909a9a9ddb67fa1ce50cc00ee17a7c94f12612ef" Feb 25 14:16:22 crc kubenswrapper[5005]: I0225 14:16:22.335002 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/crc-debug-9g84t" Feb 25 14:16:22 crc kubenswrapper[5005]: I0225 14:16:22.698409 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7142bc-4a6d-406a-937b-7a2eb008a44e" path="/var/lib/kubelet/pods/9f7142bc-4a6d-406a-937b-7a2eb008a44e/volumes" Feb 25 14:16:26 crc kubenswrapper[5005]: I0225 14:16:26.700107 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:16:26 crc kubenswrapper[5005]: E0225 14:16:26.701047 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:16:28 crc kubenswrapper[5005]: I0225 14:16:28.612015 5005 scope.go:117] "RemoveContainer" containerID="c0ef01cc88fea1852f4c8d46ddcc19af6937cbf00b7369e80bc04700b9e5518f" Feb 25 14:16:38 crc kubenswrapper[5005]: I0225 14:16:38.687409 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:16:38 crc kubenswrapper[5005]: E0225 14:16:38.690364 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:16:51 crc kubenswrapper[5005]: I0225 14:16:51.686115 5005 scope.go:117] "RemoveContainer" 
containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:16:51 crc kubenswrapper[5005]: E0225 14:16:51.687665 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:16:59 crc kubenswrapper[5005]: I0225 14:16:59.604784 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-64d599957d-86x2s_e7a211eb-8b24-4344-a82e-65ad4770881a/barbican-api/0.log" Feb 25 14:16:59 crc kubenswrapper[5005]: I0225 14:16:59.756778 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-64d599957d-86x2s_e7a211eb-8b24-4344-a82e-65ad4770881a/barbican-api-log/0.log" Feb 25 14:16:59 crc kubenswrapper[5005]: I0225 14:16:59.783833 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fb6567bf4-7vxqw_129a4e2f-9f64-4fbb-a36a-f894073762db/barbican-keystone-listener/0.log" Feb 25 14:16:59 crc kubenswrapper[5005]: I0225 14:16:59.975739 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f48f7767-7pg69_2458c085-0fe5-433b-8f9d-d7406e2cd54c/barbican-worker/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.108946 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f48f7767-7pg69_2458c085-0fe5-433b-8f9d-d7406e2cd54c/barbican-worker-log/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.323234 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5fb6567bf4-7vxqw_129a4e2f-9f64-4fbb-a36a-f894073762db/barbican-keystone-listener-log/0.log" Feb 
25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.369929 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2jhfb_6148b2f6-0020-48b4-9d78-19fd37400e69/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.475011 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b438d6e5-da98-4860-ae13-71372ea976cb/ceilometer-central-agent/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.538727 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b438d6e5-da98-4860-ae13-71372ea976cb/ceilometer-notification-agent/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.581936 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b438d6e5-da98-4860-ae13-71372ea976cb/proxy-httpd/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.595808 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b438d6e5-da98-4860-ae13-71372ea976cb/sg-core/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.785738 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-jkhrj_254d9140-397a-4f78-b3bc-ad03ddd2fb26/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:00 crc kubenswrapper[5005]: I0225 14:17:00.845788 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2nljc_951c3390-5e6e-4ac4-9833-f7c71959fae6/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.112774 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fe51dd1b-0b94-44f9-b7a5-e76b07e62f10/cinder-api-log/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.199530 5005 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cinder-api-0_fe51dd1b-0b94-44f9-b7a5-e76b07e62f10/cinder-api/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.415790 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b64ce473-ca21-4e77-afd9-24d93e79a71f/probe/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.458852 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b64ce473-ca21-4e77-afd9-24d93e79a71f/cinder-backup/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.483347 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_29bc251d-1657-4787-81f8-83fdf903f229/cinder-scheduler/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.801690 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_29bc251d-1657-4787-81f8-83fdf903f229/probe/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.863156 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_da2a296c-43d5-4d35-9929-d13584a2d821/cinder-volume/0.log" Feb 25 14:17:01 crc kubenswrapper[5005]: I0225 14:17:01.959096 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_da2a296c-43d5-4d35-9929-d13584a2d821/probe/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.043864 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gmj5r_0a6d3b90-c256-4163-9be6-e28fc31f4094/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.184980 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2lkjk_de337099-6e1c-496a-a163-ebc04d7a0fc8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: 
I0225 14:17:02.304787 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-mb8k4_d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856/init/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.487855 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-mb8k4_d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856/init/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.629667 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e67820f3-0872-48ef-b531-e263d5be19bf/glance-httpd/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.744872 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-mb8k4_d5ab8f06-b6d7-4d93-9aed-ca2cf66b3856/dnsmasq-dns/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.762646 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e67820f3-0872-48ef-b531-e263d5be19bf/glance-log/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.921383 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5484c86b-7109-4708-b68d-a8ba3a06925b/glance-httpd/0.log" Feb 25 14:17:02 crc kubenswrapper[5005]: I0225 14:17:02.953810 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5484c86b-7109-4708-b68d-a8ba3a06925b/glance-log/0.log" Feb 25 14:17:03 crc kubenswrapper[5005]: I0225 14:17:03.254620 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9vfx5_2f97d5cb-2060-4a95-84dd-e4dca87fb87a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:03 crc kubenswrapper[5005]: I0225 14:17:03.281879 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-f8bbcbf96-lg5q8_e277143e-cdb1-4dda-976d-f06c58c14c33/horizon/0.log" Feb 25 14:17:03 crc kubenswrapper[5005]: I0225 14:17:03.469285 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-x9rp4_daf2a90e-77cb-446f-8fbf-1f526edc75d9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:03 crc kubenswrapper[5005]: I0225 14:17:03.841918 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29533681-d94x7_39300e2f-d3dc-453a-8555-0453080ef2bd/keystone-cron/0.log" Feb 25 14:17:04 crc kubenswrapper[5005]: I0225 14:17:04.143813 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29533741-cx7jz_797ec89c-781b-4d90-a6f5-7e54312be3d9/keystone-cron/0.log" Feb 25 14:17:04 crc kubenswrapper[5005]: I0225 14:17:04.232981 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29533801-h2bmh_6f5b545f-a79f-408f-8dae-4c688e9a70eb/keystone-cron/0.log" Feb 25 14:17:04 crc kubenswrapper[5005]: I0225 14:17:04.390690 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4b524360-8bfc-488d-b2ec-2668afe9b13d/kube-state-metrics/0.log" Feb 25 14:17:04 crc kubenswrapper[5005]: I0225 14:17:04.681674 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jldwr_6aeb7945-54e1-40bf-b490-7623fac580b0/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:04 crc kubenswrapper[5005]: I0225 14:17:04.854885 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_e583340e-f1c1-49c8-bb9b-edce3afe87c5/manila-api-log/0.log" Feb 25 14:17:04 crc kubenswrapper[5005]: I0225 14:17:04.894096 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_e583340e-f1c1-49c8-bb9b-edce3afe87c5/manila-api/0.log" Feb 25 14:17:05 crc 
kubenswrapper[5005]: I0225 14:17:05.117071 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f8bbcbf96-lg5q8_e277143e-cdb1-4dda-976d-f06c58c14c33/horizon-log/0.log" Feb 25 14:17:05 crc kubenswrapper[5005]: I0225 14:17:05.149198 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_eddaedf3-e462-471f-abac-a5e1553d14a4/probe/0.log" Feb 25 14:17:05 crc kubenswrapper[5005]: I0225 14:17:05.182423 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_eddaedf3-e462-471f-abac-a5e1553d14a4/manila-scheduler/0.log" Feb 25 14:17:05 crc kubenswrapper[5005]: I0225 14:17:05.414812 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_213c62b6-1330-492a-92aa-4e756678a6f2/probe/0.log" Feb 25 14:17:05 crc kubenswrapper[5005]: I0225 14:17:05.489178 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_213c62b6-1330-492a-92aa-4e756678a6f2/manila-share/0.log" Feb 25 14:17:06 crc kubenswrapper[5005]: I0225 14:17:06.054632 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pn82x_44b0558f-354e-40f9-9db7-8f5e8762f1fe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:06 crc kubenswrapper[5005]: I0225 14:17:06.646809 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74447dd785-mk8tc_360249f4-3024-4567-afa0-f52fb42cc400/keystone-api/0.log" Feb 25 14:17:06 crc kubenswrapper[5005]: I0225 14:17:06.647805 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84f5fb5d89-rmhhp_ced877ad-f6e2-4f0e-a4dd-6bf09e612717/neutron-httpd/0.log" Feb 25 14:17:06 crc kubenswrapper[5005]: I0225 14:17:06.693986 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:17:06 crc kubenswrapper[5005]: 
E0225 14:17:06.694233 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:17:07 crc kubenswrapper[5005]: I0225 14:17:07.435367 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84f5fb5d89-rmhhp_ced877ad-f6e2-4f0e-a4dd-6bf09e612717/neutron-api/0.log" Feb 25 14:17:07 crc kubenswrapper[5005]: I0225 14:17:07.721829 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f6b51fe3-5b9f-4745-8d5a-c9418091f431/nova-cell0-conductor-conductor/0.log" Feb 25 14:17:08 crc kubenswrapper[5005]: I0225 14:17:08.077149 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e4dbecac-c5f9-4cdf-a25a-6b3e24ca93df/nova-cell1-conductor-conductor/0.log" Feb 25 14:17:08 crc kubenswrapper[5005]: I0225 14:17:08.497602 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f2858fac-fd3d-46ed-9ac2-f057ed2d8395/nova-cell1-novncproxy-novncproxy/0.log" Feb 25 14:17:08 crc kubenswrapper[5005]: I0225 14:17:08.598682 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-ngfwl_13abaed5-d544-41f0-8bd9-07bbd0798c33/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:08 crc kubenswrapper[5005]: I0225 14:17:08.893089 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a7bae511-5969-46d6-9bb5-e45984a014e4/nova-metadata-log/0.log" Feb 25 14:17:09 crc kubenswrapper[5005]: I0225 14:17:09.951597 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_a4ac86e9-6f7f-4d5a-8307-6f1dd8f43032/nova-scheduler-scheduler/0.log" Feb 25 14:17:10 crc kubenswrapper[5005]: I0225 14:17:10.365559 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7d41f2ea-e694-463b-a0bb-d8b987bab0b4/mysql-bootstrap/0.log" Feb 25 14:17:10 crc kubenswrapper[5005]: I0225 14:17:10.608081 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1050651a-de95-4e1f-92ca-9f89194a2ae3/nova-api-log/0.log" Feb 25 14:17:11 crc kubenswrapper[5005]: I0225 14:17:11.059506 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7d41f2ea-e694-463b-a0bb-d8b987bab0b4/mysql-bootstrap/0.log" Feb 25 14:17:11 crc kubenswrapper[5005]: I0225 14:17:11.071526 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7d41f2ea-e694-463b-a0bb-d8b987bab0b4/galera/0.log" Feb 25 14:17:11 crc kubenswrapper[5005]: I0225 14:17:11.290610 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe9dcc0a-0321-4f68-929f-fb5393b97e38/mysql-bootstrap/0.log" Feb 25 14:17:11 crc kubenswrapper[5005]: I0225 14:17:11.453919 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe9dcc0a-0321-4f68-929f-fb5393b97e38/mysql-bootstrap/0.log" Feb 25 14:17:11 crc kubenswrapper[5005]: I0225 14:17:11.550665 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe9dcc0a-0321-4f68-929f-fb5393b97e38/galera/0.log" Feb 25 14:17:11 crc kubenswrapper[5005]: I0225 14:17:11.796293 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7513bdda-48d0-4c57-a264-56886c4a89bd/openstackclient/0.log" Feb 25 14:17:12 crc kubenswrapper[5005]: I0225 14:17:12.042753 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-q7v44_fc1ff781-cfcd-4b44-92d5-ff9153b19871/openstack-network-exporter/0.log" Feb 25 14:17:12 crc kubenswrapper[5005]: I0225 14:17:12.185303 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1050651a-de95-4e1f-92ca-9f89194a2ae3/nova-api-api/0.log" Feb 25 14:17:12 crc kubenswrapper[5005]: I0225 14:17:12.259985 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mhhkt_d7433eab-76d5-403c-8949-6b99fa8624d5/ovn-controller/0.log" Feb 25 14:17:12 crc kubenswrapper[5005]: I0225 14:17:12.443338 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stnkf_8bb9b011-87e3-4dd1-bec8-10c27806cad6/ovsdb-server-init/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.043335 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stnkf_8bb9b011-87e3-4dd1-bec8-10c27806cad6/ovs-vswitchd/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.045978 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stnkf_8bb9b011-87e3-4dd1-bec8-10c27806cad6/ovsdb-server-init/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.052332 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stnkf_8bb9b011-87e3-4dd1-bec8-10c27806cad6/ovsdb-server/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.326363 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-w6dhj_5d029a87-5e79-4abe-9bc5-68de638fb6b8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.507046 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e2fa3add-6e64-4cfb-9349-7650d9fa6da5/openstack-network-exporter/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.544543 5005 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e2fa3add-6e64-4cfb-9349-7650d9fa6da5/ovn-northd/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.718914 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b6d805e-5f35-4f58-a71a-5bdbb4eba017/openstack-network-exporter/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.779615 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b6d805e-5f35-4f58-a71a-5bdbb4eba017/ovsdbserver-nb/0.log" Feb 25 14:17:13 crc kubenswrapper[5005]: I0225 14:17:13.965514 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_69573081-3b63-4aab-b734-c29867f9f0c1/openstack-network-exporter/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.014480 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_69573081-3b63-4aab-b734-c29867f9f0c1/ovsdbserver-sb/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.504023 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e53714d3-f02e-4700-a89d-d6a8dbcff7d3/setup-container/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.578484 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a7bae511-5969-46d6-9bb5-e45984a014e4/nova-metadata-metadata/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.730992 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e53714d3-f02e-4700-a89d-d6a8dbcff7d3/setup-container/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.752948 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c94f4bc5b-tpjtv_fed08fee-c796-4162-b342-458bb0d5fc68/placement-api/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.771832 5005 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e53714d3-f02e-4700-a89d-d6a8dbcff7d3/rabbitmq/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.940038 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7951928d-6b95-4766-b04f-3b7f448ad731/setup-container/0.log" Feb 25 14:17:14 crc kubenswrapper[5005]: I0225 14:17:14.945823 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c94f4bc5b-tpjtv_fed08fee-c796-4162-b342-458bb0d5fc68/placement-log/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.150199 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7951928d-6b95-4766-b04f-3b7f448ad731/rabbitmq/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.163625 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7951928d-6b95-4766-b04f-3b7f448ad731/setup-container/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.225685 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-6z458_48c9975f-aaf1-49c2-8dfe-ee8eb7180754/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.429140 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-w89mw_d7e93344-ea31-4de6-8473-4bd46cbe028b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.448036 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6pdzz_9f92f677-2047-4b97-ad0c-3862a43553d5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.657831 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zk2nb_fc7b1540-d8b4-4387-98ac-a56fb073470d/ssh-known-hosts-edpm-deployment/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.784775 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_31e745d1-9f40-4deb-adb0-7cb412b3b21f/tempest-tests-tempest-tests-runner/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.908935 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_444bbedc-b80c-45ab-8538-c572e17d3892/tempest-tests-tempest-tests-runner/0.log" Feb 25 14:17:15 crc kubenswrapper[5005]: I0225 14:17:15.989251 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2840d4e0-d718-4faf-81f2-b5b054a2fd5d/test-operator-logs-container/0.log" Feb 25 14:17:16 crc kubenswrapper[5005]: I0225 14:17:16.163590 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_ba83c291-5d2f-4fdd-9b61-e3c5a39b86c1/test-operator-logs-container/0.log" Feb 25 14:17:16 crc kubenswrapper[5005]: I0225 14:17:16.211646 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_c817cda1-46cd-4d4d-bf2b-6d99af91a859/tobiko-tests-tobiko/0.log" Feb 25 14:17:16 crc kubenswrapper[5005]: I0225 14:17:16.407287 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_5667c5d7-d8e5-482e-8463-6660b2289aa5/tobiko-tests-tobiko/0.log" Feb 25 14:17:16 crc kubenswrapper[5005]: I0225 14:17:16.445633 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-r67xp_925953b4-980e-4cf1-be73-3c6e33b496c5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 25 14:17:20 crc kubenswrapper[5005]: I0225 
14:17:20.687850 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:17:20 crc kubenswrapper[5005]: E0225 14:17:20.689214 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:17:32 crc kubenswrapper[5005]: I0225 14:17:32.686320 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:17:32 crc kubenswrapper[5005]: E0225 14:17:32.686968 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:17:34 crc kubenswrapper[5005]: I0225 14:17:34.384141 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9b0ce46e-c63e-49fa-b35c-10745cf3abc4/memcached/0.log" Feb 25 14:17:44 crc kubenswrapper[5005]: I0225 14:17:44.686396 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:17:44 crc kubenswrapper[5005]: E0225 14:17:44.687118 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:17:44 crc kubenswrapper[5005]: I0225 14:17:44.992803 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/util/0.log" Feb 25 14:17:45 crc kubenswrapper[5005]: I0225 14:17:45.218245 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/pull/0.log" Feb 25 14:17:45 crc kubenswrapper[5005]: I0225 14:17:45.249039 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/util/0.log" Feb 25 14:17:45 crc kubenswrapper[5005]: I0225 14:17:45.431711 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/pull/0.log" Feb 25 14:17:45 crc kubenswrapper[5005]: I0225 14:17:45.601878 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/util/0.log" Feb 25 14:17:45 crc kubenswrapper[5005]: I0225 14:17:45.630138 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/pull/0.log" Feb 25 14:17:45 crc kubenswrapper[5005]: I0225 14:17:45.790135 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d73e3e6ee0845b4b1f6787829b10c723b3732e094d91c565492c41e8a8hj2s8_84c3a6b4-399c-427a-aa4c-a19b41e897f6/extract/0.log" Feb 25 14:17:46 crc kubenswrapper[5005]: I0225 14:17:46.036151 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-2flh2_bbe120fb-31bc-4979-afb0-e629a69b4c80/manager/0.log" Feb 25 14:17:46 crc kubenswrapper[5005]: I0225 14:17:46.433193 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-b4p92_d4d77380-132e-40ee-859f-ed77a83e2f0a/manager/0.log" Feb 25 14:17:46 crc kubenswrapper[5005]: I0225 14:17:46.522122 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kmwrd_10203f00-712d-4f78-87eb-973cd8b82e16/manager/0.log" Feb 25 14:17:46 crc kubenswrapper[5005]: I0225 14:17:46.773975 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-c24l5_baa1eb2e-998d-46b3-8641-f2274bb32274/manager/0.log" Feb 25 14:17:47 crc kubenswrapper[5005]: I0225 14:17:47.360156 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-79ddk_e5913871-3107-4c84-b940-34c8f4171fc2/manager/0.log" Feb 25 14:17:47 crc kubenswrapper[5005]: I0225 14:17:47.463720 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-8jrrj_153c478a-59a9-4d31-8822-cfb3b62d9c39/manager/0.log" Feb 25 14:17:47 crc kubenswrapper[5005]: I0225 14:17:47.725463 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-4kqj7_a87589fa-1024-43bc-85ec-e9c3bf944db3/manager/0.log" Feb 25 14:17:48 crc kubenswrapper[5005]: I0225 14:17:48.044764 5005 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-zlm8p_c70b1ffc-e4b5-4e6e-9b0e-e3141c223d33/manager/0.log" Feb 25 14:17:48 crc kubenswrapper[5005]: I0225 14:17:48.403194 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-swlsl_6a567e4b-427c-4355-a59b-22f247ce374f/manager/0.log" Feb 25 14:17:48 crc kubenswrapper[5005]: I0225 14:17:48.431864 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-8mtqq_bb12a9fa-312f-4ecd-9732-513717eeb77e/manager/0.log" Feb 25 14:17:48 crc kubenswrapper[5005]: I0225 14:17:48.758603 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-xr4bs_ef3a922b-0335-4403-b695-5924b4ce2650/manager/0.log" Feb 25 14:17:48 crc kubenswrapper[5005]: I0225 14:17:48.762509 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-5gzxv_681d6a40-d67d-4d22-985d-7f3b6f10a1d7/manager/0.log" Feb 25 14:17:48 crc kubenswrapper[5005]: I0225 14:17:48.930755 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-xfzrq_12ea1888-c73d-4a23-a3c2-ba52588b8eba/manager/0.log" Feb 25 14:17:49 crc kubenswrapper[5005]: I0225 14:17:49.024963 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ck47t4_ce436a93-47a7-48b4-8134-04bd264c6105/manager/0.log" Feb 25 14:17:49 crc kubenswrapper[5005]: I0225 14:17:49.269562 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-75d447ff8b-vmmgj_6972a9d4-7b95-4a9d-9b01-ee67bb1fb9af/operator/0.log" Feb 25 14:17:49 crc kubenswrapper[5005]: I0225 
14:17:49.482130 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g97mm_16ce7172-a1e1-4af8-a26b-6700cad253a3/registry-server/0.log" Feb 25 14:17:49 crc kubenswrapper[5005]: I0225 14:17:49.657966 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-xfwvf_100cda48-3590-4281-8f00-6881497a2420/manager/0.log" Feb 25 14:17:49 crc kubenswrapper[5005]: I0225 14:17:49.776270 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-5rslz_ec29cb03-6d5b-4922-bf56-da937019444d/manager/0.log" Feb 25 14:17:49 crc kubenswrapper[5005]: I0225 14:17:49.877743 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fc4gc_626ae184-1624-447b-a9be-7e4d92dc4e67/operator/0.log" Feb 25 14:17:50 crc kubenswrapper[5005]: I0225 14:17:50.089279 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-b27xm_22145e61-39f2-479f-a6a9-82afb5c654df/manager/0.log" Feb 25 14:17:50 crc kubenswrapper[5005]: I0225 14:17:50.337007 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-x5c6m_667f5dd6-b885-416e-b0da-aa3af5f91d4e/manager/0.log" Feb 25 14:17:50 crc kubenswrapper[5005]: I0225 14:17:50.454187 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5d94c77696-97dh4_5a255675-1625-4741-a27d-dbd287a31276/manager/0.log" Feb 25 14:17:50 crc kubenswrapper[5005]: I0225 14:17:50.556601 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-zzcv7_8855d562-69b7-4122-804f-dd87a6a3031a/manager/0.log" Feb 25 14:17:51 crc kubenswrapper[5005]: I0225 
14:17:51.085750 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-655db7587-5x4xq_c058ccb9-0d11-4b5b-bbc8-bb88b605a661/manager/0.log" Feb 25 14:17:55 crc kubenswrapper[5005]: I0225 14:17:55.685947 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:17:55 crc kubenswrapper[5005]: E0225 14:17:55.686517 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:17:59 crc kubenswrapper[5005]: I0225 14:17:59.577795 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-6hqnr_e43cd401-1094-4b7e-89cd-08216d652cee/manager/0.log" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.142682 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533818-df79r"] Feb 25 14:18:00 crc kubenswrapper[5005]: E0225 14:18:00.143007 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7142bc-4a6d-406a-937b-7a2eb008a44e" containerName="container-00" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.143018 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7142bc-4a6d-406a-937b-7a2eb008a44e" containerName="container-00" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.143187 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7142bc-4a6d-406a-937b-7a2eb008a44e" containerName="container-00" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.143763 5005 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.165385 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.165616 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.165763 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.186450 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533818-df79r"] Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.342947 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66kd\" (UniqueName: \"kubernetes.io/projected/21240307-3105-420f-b08e-d921f14ac6bf-kube-api-access-l66kd\") pod \"auto-csr-approver-29533818-df79r\" (UID: \"21240307-3105-420f-b08e-d921f14ac6bf\") " pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.445667 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66kd\" (UniqueName: \"kubernetes.io/projected/21240307-3105-420f-b08e-d921f14ac6bf-kube-api-access-l66kd\") pod \"auto-csr-approver-29533818-df79r\" (UID: \"21240307-3105-420f-b08e-d921f14ac6bf\") " pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.517362 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66kd\" (UniqueName: \"kubernetes.io/projected/21240307-3105-420f-b08e-d921f14ac6bf-kube-api-access-l66kd\") pod \"auto-csr-approver-29533818-df79r\" (UID: \"21240307-3105-420f-b08e-d921f14ac6bf\") " 
pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:00 crc kubenswrapper[5005]: I0225 14:18:00.784211 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:01 crc kubenswrapper[5005]: I0225 14:18:01.259316 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533818-df79r"] Feb 25 14:18:01 crc kubenswrapper[5005]: I0225 14:18:01.369705 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 14:18:02 crc kubenswrapper[5005]: I0225 14:18:02.205511 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533818-df79r" event={"ID":"21240307-3105-420f-b08e-d921f14ac6bf","Type":"ContainerStarted","Data":"2fef9e8dfc05f3c6f2655906f64311b62f13bf75c22dd02265eb703822d4bd62"} Feb 25 14:18:04 crc kubenswrapper[5005]: I0225 14:18:04.233891 5005 generic.go:334] "Generic (PLEG): container finished" podID="21240307-3105-420f-b08e-d921f14ac6bf" containerID="b3172521c595e1f3285289c1286ce61241ab5cafea36b29b47ede131e86164d4" exitCode=0 Feb 25 14:18:04 crc kubenswrapper[5005]: I0225 14:18:04.234074 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533818-df79r" event={"ID":"21240307-3105-420f-b08e-d921f14ac6bf","Type":"ContainerDied","Data":"b3172521c595e1f3285289c1286ce61241ab5cafea36b29b47ede131e86164d4"} Feb 25 14:18:05 crc kubenswrapper[5005]: I0225 14:18:05.648880 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:05 crc kubenswrapper[5005]: I0225 14:18:05.749175 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l66kd\" (UniqueName: \"kubernetes.io/projected/21240307-3105-420f-b08e-d921f14ac6bf-kube-api-access-l66kd\") pod \"21240307-3105-420f-b08e-d921f14ac6bf\" (UID: \"21240307-3105-420f-b08e-d921f14ac6bf\") " Feb 25 14:18:05 crc kubenswrapper[5005]: I0225 14:18:05.761706 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21240307-3105-420f-b08e-d921f14ac6bf-kube-api-access-l66kd" (OuterVolumeSpecName: "kube-api-access-l66kd") pod "21240307-3105-420f-b08e-d921f14ac6bf" (UID: "21240307-3105-420f-b08e-d921f14ac6bf"). InnerVolumeSpecName "kube-api-access-l66kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:18:05 crc kubenswrapper[5005]: I0225 14:18:05.852085 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l66kd\" (UniqueName: \"kubernetes.io/projected/21240307-3105-420f-b08e-d921f14ac6bf-kube-api-access-l66kd\") on node \"crc\" DevicePath \"\"" Feb 25 14:18:06 crc kubenswrapper[5005]: I0225 14:18:06.253187 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533818-df79r" event={"ID":"21240307-3105-420f-b08e-d921f14ac6bf","Type":"ContainerDied","Data":"2fef9e8dfc05f3c6f2655906f64311b62f13bf75c22dd02265eb703822d4bd62"} Feb 25 14:18:06 crc kubenswrapper[5005]: I0225 14:18:06.253530 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fef9e8dfc05f3c6f2655906f64311b62f13bf75c22dd02265eb703822d4bd62" Feb 25 14:18:06 crc kubenswrapper[5005]: I0225 14:18:06.253242 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533818-df79r" Feb 25 14:18:06 crc kubenswrapper[5005]: I0225 14:18:06.695033 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:18:06 crc kubenswrapper[5005]: I0225 14:18:06.767549 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533812-s6p86"] Feb 25 14:18:06 crc kubenswrapper[5005]: I0225 14:18:06.791631 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533812-s6p86"] Feb 25 14:18:07 crc kubenswrapper[5005]: I0225 14:18:07.262500 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"ad13f166a37837786041d84a352dd1da26ff2473b3b78faeba1072292dc343ec"} Feb 25 14:18:08 crc kubenswrapper[5005]: I0225 14:18:08.697398 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb8f230-652e-4c81-b2af-9849280eabe6" path="/var/lib/kubelet/pods/7cb8f230-652e-4c81-b2af-9849280eabe6/volumes" Feb 25 14:18:14 crc kubenswrapper[5005]: I0225 14:18:14.765854 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hl9wb_75d48e65-ec5a-4706-bf55-2b97ee71af11/control-plane-machine-set-operator/0.log" Feb 25 14:18:14 crc kubenswrapper[5005]: I0225 14:18:14.966494 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-75zdc_771a10ce-b3f7-4d81-9963-51d7a38c2cdf/machine-api-operator/0.log" Feb 25 14:18:14 crc kubenswrapper[5005]: I0225 14:18:14.977991 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-75zdc_771a10ce-b3f7-4d81-9963-51d7a38c2cdf/kube-rbac-proxy/0.log" Feb 25 14:18:28 crc 
kubenswrapper[5005]: I0225 14:18:28.110445 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fqn5s_3bf0b057-d203-4825-973b-c6ec18ce6008/cert-manager-controller/0.log" Feb 25 14:18:28 crc kubenswrapper[5005]: I0225 14:18:28.296164 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8qqzv_7080961a-f87d-44d8-ba9b-a97d6e6113a3/cert-manager-cainjector/0.log" Feb 25 14:18:28 crc kubenswrapper[5005]: I0225 14:18:28.372924 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7q9jl_597607f6-7b88-42c4-959e-c40d7273b7d5/cert-manager-webhook/0.log" Feb 25 14:18:28 crc kubenswrapper[5005]: I0225 14:18:28.735960 5005 scope.go:117] "RemoveContainer" containerID="5edd8f3d83d924607e11bf0f91717a191dd4e59c85a9b0246bce5485f6c0886a" Feb 25 14:18:42 crc kubenswrapper[5005]: I0225 14:18:42.026910 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-j5r7m_eb9a2251-b746-4e9b-88b9-2db41d719a6b/nmstate-console-plugin/0.log" Feb 25 14:18:42 crc kubenswrapper[5005]: I0225 14:18:42.221287 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vjwbs_e395d174-8019-4c2d-b6eb-01de557ad7f0/nmstate-handler/0.log" Feb 25 14:18:42 crc kubenswrapper[5005]: I0225 14:18:42.314352 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7v8sm_fb9e60b3-a104-48de-8db3-d056e7803ed1/kube-rbac-proxy/0.log" Feb 25 14:18:42 crc kubenswrapper[5005]: I0225 14:18:42.418348 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7v8sm_fb9e60b3-a104-48de-8db3-d056e7803ed1/nmstate-metrics/0.log" Feb 25 14:18:42 crc kubenswrapper[5005]: I0225 14:18:42.439586 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-49k98_e9d3d9e7-9646-45be-b459-10fce735fd04/nmstate-operator/0.log" Feb 25 14:18:42 crc kubenswrapper[5005]: I0225 14:18:42.566589 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-jg2hs_f9b21027-4967-42b7-bb45-800971bccae6/nmstate-webhook/0.log" Feb 25 14:19:08 crc kubenswrapper[5005]: I0225 14:19:08.796981 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-j6mkr_c2ccc2c1-2a28-4507-8713-2aad869af209/kube-rbac-proxy/0.log" Feb 25 14:19:08 crc kubenswrapper[5005]: I0225 14:19:08.951067 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-j6mkr_c2ccc2c1-2a28-4507-8713-2aad869af209/controller/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.006502 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-frr-files/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.206852 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-metrics/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.248058 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-frr-files/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.249571 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-reloader/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.270061 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-reloader/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.428126 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-frr-files/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.432058 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-reloader/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.455889 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-metrics/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.468725 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-metrics/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.648978 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-reloader/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.658736 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-frr-files/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.681580 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/controller/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.683901 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/cp-metrics/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.858986 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/frr-metrics/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.866996 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/kube-rbac-proxy-frr/0.log" Feb 25 14:19:09 crc kubenswrapper[5005]: I0225 14:19:09.872988 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/kube-rbac-proxy/0.log" Feb 25 14:19:10 crc kubenswrapper[5005]: I0225 14:19:10.059660 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/reloader/0.log" Feb 25 14:19:10 crc kubenswrapper[5005]: I0225 14:19:10.116567 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-fljdj_4f1e84e7-d8d7-4b92-acc0-28ff4a1c94ae/frr-k8s-webhook-server/0.log" Feb 25 14:19:10 crc kubenswrapper[5005]: I0225 14:19:10.335950 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-845fb74b78-rxqvx_9e72e13f-7621-408a-b858-4b83e090769b/manager/0.log" Feb 25 14:19:10 crc kubenswrapper[5005]: I0225 14:19:10.527616 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58d6fc4c6d-lmf6n_389ed2af-b504-4485-8232-bf3a8fed70e8/webhook-server/0.log" Feb 25 14:19:10 crc kubenswrapper[5005]: I0225 14:19:10.622655 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9h4p9_0c5c912d-da16-4405-864f-3459d2ef4e9c/kube-rbac-proxy/0.log" Feb 25 14:19:11 crc kubenswrapper[5005]: I0225 14:19:11.293564 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9h4p9_0c5c912d-da16-4405-864f-3459d2ef4e9c/speaker/0.log" Feb 25 14:19:13 crc kubenswrapper[5005]: I0225 14:19:13.586166 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2g97t_a3c7dfa8-0263-4f57-84c7-c61b75fab65c/frr/0.log" Feb 25 14:19:24 crc kubenswrapper[5005]: I0225 14:19:24.907861 5005 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/util/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.122676 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/util/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.178145 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/pull/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.185120 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/pull/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.332456 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/pull/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.364553 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/util/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.462907 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p84xp_63201af9-c23b-44a7-9d91-97243558a963/extract/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.517275 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/extract-utilities/0.log" Feb 
25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.770227 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/extract-utilities/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.810560 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/extract-content/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.826038 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/extract-content/0.log" Feb 25 14:19:25 crc kubenswrapper[5005]: I0225 14:19:25.974969 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/extract-content/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.155882 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/extract-utilities/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.205880 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-g7tp7_5b3b898a-5622-4477-8eb9-85d277e28efb/registry-server/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.312505 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/extract-utilities/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.483941 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/extract-utilities/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.497898 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/extract-content/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.520011 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/extract-content/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.693233 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/extract-utilities/0.log" Feb 25 14:19:26 crc kubenswrapper[5005]: I0225 14:19:26.791919 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/extract-content/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.032132 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/util/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.073558 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9djgj_2bb0d8f1-3374-4893-b5db-e7c5e27b0f43/registry-server/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.126038 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/util/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.169569 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/pull/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.249103 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/pull/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.476734 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/extract/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.490500 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/pull/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.499745 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawrrnc_8ecc824f-780b-43ba-8d32-9cc548278165/util/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.668130 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9d6zm_5733f94a-093f-4eda-ad84-a2d3cf989483/marketplace-operator/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.684109 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/extract-utilities/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.971275 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/extract-utilities/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.977821 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/extract-content/0.log" Feb 25 14:19:27 crc kubenswrapper[5005]: I0225 14:19:27.986264 5005 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/extract-content/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.171054 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/extract-utilities/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.292150 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/extract-content/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.304048 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5m2mn_a8ca94a0-3d97-4bc0-a7d8-9eb2f96304b0/registry-server/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.370622 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/extract-utilities/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.569001 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/extract-content/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.574859 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/extract-utilities/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.596451 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/extract-content/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.741480 5005 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/extract-content/0.log" Feb 25 14:19:28 crc kubenswrapper[5005]: I0225 14:19:28.747891 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/extract-utilities/0.log" Feb 25 14:19:29 crc kubenswrapper[5005]: I0225 14:19:29.853970 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4dt5p_57238c9b-440f-4958-9380-41b2fed1033e/registry-server/0.log" Feb 25 14:19:53 crc kubenswrapper[5005]: E0225 14:19:53.506095 5005 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.233:46000->38.102.83.233:40985: write tcp 38.102.83.233:46000->38.102.83.233:40985: write: broken pipe Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.151805 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533820-cth59"] Feb 25 14:20:00 crc kubenswrapper[5005]: E0225 14:20:00.152804 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21240307-3105-420f-b08e-d921f14ac6bf" containerName="oc" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.152818 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="21240307-3105-420f-b08e-d921f14ac6bf" containerName="oc" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.152999 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="21240307-3105-420f-b08e-d921f14ac6bf" containerName="oc" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.153714 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.156472 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.156518 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.156740 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.162902 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533820-cth59"] Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.227597 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp9h5\" (UniqueName: \"kubernetes.io/projected/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132-kube-api-access-kp9h5\") pod \"auto-csr-approver-29533820-cth59\" (UID: \"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132\") " pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.330040 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp9h5\" (UniqueName: \"kubernetes.io/projected/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132-kube-api-access-kp9h5\") pod \"auto-csr-approver-29533820-cth59\" (UID: \"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132\") " pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.352678 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp9h5\" (UniqueName: \"kubernetes.io/projected/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132-kube-api-access-kp9h5\") pod \"auto-csr-approver-29533820-cth59\" (UID: \"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132\") " 
pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:00 crc kubenswrapper[5005]: I0225 14:20:00.478170 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:01 crc kubenswrapper[5005]: I0225 14:20:01.045531 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533820-cth59"] Feb 25 14:20:02 crc kubenswrapper[5005]: I0225 14:20:02.026949 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533820-cth59" event={"ID":"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132","Type":"ContainerStarted","Data":"d618d4efd81fb4ed5cc127ae6453654e1d442ea223e43433b3b283bef0c47355"} Feb 25 14:20:03 crc kubenswrapper[5005]: I0225 14:20:03.036196 5005 generic.go:334] "Generic (PLEG): container finished" podID="e2d8bc83-8aa9-4c6f-8018-e0f2c534f132" containerID="aa03753021aa581cdfd1cb8672485f2c0c59d45655393a66059202c1da6f2a95" exitCode=0 Feb 25 14:20:03 crc kubenswrapper[5005]: I0225 14:20:03.036255 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533820-cth59" event={"ID":"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132","Type":"ContainerDied","Data":"aa03753021aa581cdfd1cb8672485f2c0c59d45655393a66059202c1da6f2a95"} Feb 25 14:20:04 crc kubenswrapper[5005]: I0225 14:20:04.391327 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:04 crc kubenswrapper[5005]: I0225 14:20:04.415981 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp9h5\" (UniqueName: \"kubernetes.io/projected/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132-kube-api-access-kp9h5\") pod \"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132\" (UID: \"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132\") " Feb 25 14:20:04 crc kubenswrapper[5005]: I0225 14:20:04.424507 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132-kube-api-access-kp9h5" (OuterVolumeSpecName: "kube-api-access-kp9h5") pod "e2d8bc83-8aa9-4c6f-8018-e0f2c534f132" (UID: "e2d8bc83-8aa9-4c6f-8018-e0f2c534f132"). InnerVolumeSpecName "kube-api-access-kp9h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:20:04 crc kubenswrapper[5005]: I0225 14:20:04.518570 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp9h5\" (UniqueName: \"kubernetes.io/projected/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132-kube-api-access-kp9h5\") on node \"crc\" DevicePath \"\"" Feb 25 14:20:05 crc kubenswrapper[5005]: I0225 14:20:05.057222 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533820-cth59" event={"ID":"e2d8bc83-8aa9-4c6f-8018-e0f2c534f132","Type":"ContainerDied","Data":"d618d4efd81fb4ed5cc127ae6453654e1d442ea223e43433b3b283bef0c47355"} Feb 25 14:20:05 crc kubenswrapper[5005]: I0225 14:20:05.057268 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d618d4efd81fb4ed5cc127ae6453654e1d442ea223e43433b3b283bef0c47355" Feb 25 14:20:05 crc kubenswrapper[5005]: I0225 14:20:05.057319 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533820-cth59" Feb 25 14:20:05 crc kubenswrapper[5005]: I0225 14:20:05.470390 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533814-6xlqr"] Feb 25 14:20:05 crc kubenswrapper[5005]: I0225 14:20:05.478640 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533814-6xlqr"] Feb 25 14:20:06 crc kubenswrapper[5005]: I0225 14:20:06.697797 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ce3d82-823f-4884-bf5c-1523ee618ed9" path="/var/lib/kubelet/pods/c5ce3d82-823f-4884-bf5c-1523ee618ed9/volumes" Feb 25 14:20:20 crc kubenswrapper[5005]: I0225 14:20:20.975403 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ffhfn"] Feb 25 14:20:20 crc kubenswrapper[5005]: E0225 14:20:20.976531 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d8bc83-8aa9-4c6f-8018-e0f2c534f132" containerName="oc" Feb 25 14:20:20 crc kubenswrapper[5005]: I0225 14:20:20.976544 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d8bc83-8aa9-4c6f-8018-e0f2c534f132" containerName="oc" Feb 25 14:20:20 crc kubenswrapper[5005]: I0225 14:20:20.976772 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d8bc83-8aa9-4c6f-8018-e0f2c534f132" containerName="oc" Feb 25 14:20:20 crc kubenswrapper[5005]: I0225 14:20:20.978178 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:20 crc kubenswrapper[5005]: I0225 14:20:20.987241 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffhfn"] Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.077135 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-utilities\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.077468 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-catalog-content\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.077659 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpbn\" (UniqueName: \"kubernetes.io/projected/39587ddd-e69c-49d2-b605-a2dd78b36fa8-kube-api-access-mtpbn\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.180035 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-utilities\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.180116 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-catalog-content\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.180250 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpbn\" (UniqueName: \"kubernetes.io/projected/39587ddd-e69c-49d2-b605-a2dd78b36fa8-kube-api-access-mtpbn\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.181040 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-utilities\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.181064 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-catalog-content\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.207045 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtpbn\" (UniqueName: \"kubernetes.io/projected/39587ddd-e69c-49d2-b605-a2dd78b36fa8-kube-api-access-mtpbn\") pod \"community-operators-ffhfn\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.302545 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:21 crc kubenswrapper[5005]: I0225 14:20:21.899303 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffhfn"] Feb 25 14:20:22 crc kubenswrapper[5005]: I0225 14:20:22.224769 5005 generic.go:334] "Generic (PLEG): container finished" podID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerID="eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842" exitCode=0 Feb 25 14:20:22 crc kubenswrapper[5005]: I0225 14:20:22.224960 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffhfn" event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerDied","Data":"eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842"} Feb 25 14:20:22 crc kubenswrapper[5005]: I0225 14:20:22.225128 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffhfn" event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerStarted","Data":"6058b9cc432c77ea244cd3c23bd29980c360262bbcdb95f95048ef9d023915f9"} Feb 25 14:20:23 crc kubenswrapper[5005]: I0225 14:20:23.239937 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffhfn" event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerStarted","Data":"3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4"} Feb 25 14:20:24 crc kubenswrapper[5005]: I0225 14:20:24.253527 5005 generic.go:334] "Generic (PLEG): container finished" podID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerID="3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4" exitCode=0 Feb 25 14:20:24 crc kubenswrapper[5005]: I0225 14:20:24.253591 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffhfn" 
event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerDied","Data":"3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4"} Feb 25 14:20:26 crc kubenswrapper[5005]: I0225 14:20:26.272305 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffhfn" event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerStarted","Data":"73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679"} Feb 25 14:20:26 crc kubenswrapper[5005]: I0225 14:20:26.301472 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ffhfn" podStartSLOduration=3.798773659 podStartE2EDuration="6.301455528s" podCreationTimestamp="2026-02-25 14:20:20 +0000 UTC" firstStartedPulling="2026-02-25 14:20:22.226459528 +0000 UTC m=+10936.267191855" lastFinishedPulling="2026-02-25 14:20:24.729141397 +0000 UTC m=+10938.769873724" observedRunningTime="2026-02-25 14:20:26.297017169 +0000 UTC m=+10940.337749506" watchObservedRunningTime="2026-02-25 14:20:26.301455528 +0000 UTC m=+10940.342187855" Feb 25 14:20:28 crc kubenswrapper[5005]: I0225 14:20:28.087218 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:20:28 crc kubenswrapper[5005]: I0225 14:20:28.087271 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:20:28 crc kubenswrapper[5005]: I0225 14:20:28.843241 5005 scope.go:117] "RemoveContainer" 
containerID="3f21f5d981fa1f6719966b81b2798fa53a57440d55c2394f61e74ef9efe092ea" Feb 25 14:20:31 crc kubenswrapper[5005]: I0225 14:20:31.302868 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:31 crc kubenswrapper[5005]: I0225 14:20:31.303178 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:31 crc kubenswrapper[5005]: I0225 14:20:31.370773 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:31 crc kubenswrapper[5005]: I0225 14:20:31.418999 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:31 crc kubenswrapper[5005]: I0225 14:20:31.617058 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffhfn"] Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.327200 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ffhfn" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="registry-server" containerID="cri-o://73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679" gracePeriod=2 Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.765654 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.939849 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-utilities\") pod \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.939904 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtpbn\" (UniqueName: \"kubernetes.io/projected/39587ddd-e69c-49d2-b605-a2dd78b36fa8-kube-api-access-mtpbn\") pod \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.940123 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-catalog-content\") pod \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\" (UID: \"39587ddd-e69c-49d2-b605-a2dd78b36fa8\") " Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.940796 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-utilities" (OuterVolumeSpecName: "utilities") pod "39587ddd-e69c-49d2-b605-a2dd78b36fa8" (UID: "39587ddd-e69c-49d2-b605-a2dd78b36fa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:20:33 crc kubenswrapper[5005]: I0225 14:20:33.953179 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39587ddd-e69c-49d2-b605-a2dd78b36fa8-kube-api-access-mtpbn" (OuterVolumeSpecName: "kube-api-access-mtpbn") pod "39587ddd-e69c-49d2-b605-a2dd78b36fa8" (UID: "39587ddd-e69c-49d2-b605-a2dd78b36fa8"). InnerVolumeSpecName "kube-api-access-mtpbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.042634 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.042665 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtpbn\" (UniqueName: \"kubernetes.io/projected/39587ddd-e69c-49d2-b605-a2dd78b36fa8-kube-api-access-mtpbn\") on node \"crc\" DevicePath \"\"" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.176596 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39587ddd-e69c-49d2-b605-a2dd78b36fa8" (UID: "39587ddd-e69c-49d2-b605-a2dd78b36fa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.247006 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39587ddd-e69c-49d2-b605-a2dd78b36fa8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.337389 5005 generic.go:334] "Generic (PLEG): container finished" podID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerID="73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679" exitCode=0 Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.337430 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffhfn" event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerDied","Data":"73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679"} Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.337756 5005 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ffhfn" event={"ID":"39587ddd-e69c-49d2-b605-a2dd78b36fa8","Type":"ContainerDied","Data":"6058b9cc432c77ea244cd3c23bd29980c360262bbcdb95f95048ef9d023915f9"} Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.337492 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffhfn" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.348351 5005 scope.go:117] "RemoveContainer" containerID="73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.377903 5005 scope.go:117] "RemoveContainer" containerID="3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.378268 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffhfn"] Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.394597 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ffhfn"] Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.396337 5005 scope.go:117] "RemoveContainer" containerID="eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.451765 5005 scope.go:117] "RemoveContainer" containerID="73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679" Feb 25 14:20:34 crc kubenswrapper[5005]: E0225 14:20:34.452305 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679\": container with ID starting with 73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679 not found: ID does not exist" containerID="73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 
14:20:34.452362 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679"} err="failed to get container status \"73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679\": rpc error: code = NotFound desc = could not find container \"73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679\": container with ID starting with 73bf584016397e47b1881851960966a0c2f18e8ce17c45c789dd54ce9deb7679 not found: ID does not exist" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.452487 5005 scope.go:117] "RemoveContainer" containerID="3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4" Feb 25 14:20:34 crc kubenswrapper[5005]: E0225 14:20:34.453131 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4\": container with ID starting with 3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4 not found: ID does not exist" containerID="3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.453173 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4"} err="failed to get container status \"3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4\": rpc error: code = NotFound desc = could not find container \"3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4\": container with ID starting with 3cf8817fd8d3b0b043604eb67fe6f0add74525ba1637fe791c38c9fa778545c4 not found: ID does not exist" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.453193 5005 scope.go:117] "RemoveContainer" containerID="eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842" Feb 25 14:20:34 crc 
kubenswrapper[5005]: E0225 14:20:34.453534 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842\": container with ID starting with eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842 not found: ID does not exist" containerID="eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.453567 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842"} err="failed to get container status \"eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842\": rpc error: code = NotFound desc = could not find container \"eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842\": container with ID starting with eea63867a0a486fb2265e656ac8901e3e71c241d07e201d1ead8b1c9d5594842 not found: ID does not exist" Feb 25 14:20:34 crc kubenswrapper[5005]: I0225 14:20:34.704186 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" path="/var/lib/kubelet/pods/39587ddd-e69c-49d2-b605-a2dd78b36fa8/volumes" Feb 25 14:20:58 crc kubenswrapper[5005]: I0225 14:20:58.087220 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:20:58 crc kubenswrapper[5005]: I0225 14:20:58.087780 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.036481 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dgjf4"] Feb 25 14:21:09 crc kubenswrapper[5005]: E0225 14:21:09.037423 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="registry-server" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.037435 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="registry-server" Feb 25 14:21:09 crc kubenswrapper[5005]: E0225 14:21:09.037448 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="extract-content" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.037455 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="extract-content" Feb 25 14:21:09 crc kubenswrapper[5005]: E0225 14:21:09.037464 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="extract-utilities" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.037470 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="extract-utilities" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.037646 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="39587ddd-e69c-49d2-b605-a2dd78b36fa8" containerName="registry-server" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.039518 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.060786 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgjf4"] Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.139053 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-utilities\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.139199 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-catalog-content\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.139297 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4bcj\" (UniqueName: \"kubernetes.io/projected/f0020691-29ce-4966-9dd5-d34b3dd225b1-kube-api-access-q4bcj\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.240441 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-catalog-content\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.240568 5005 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q4bcj\" (UniqueName: \"kubernetes.io/projected/f0020691-29ce-4966-9dd5-d34b3dd225b1-kube-api-access-q4bcj\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.240640 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-utilities\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.241178 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-utilities\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.241306 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-catalog-content\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.659831 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4bcj\" (UniqueName: \"kubernetes.io/projected/f0020691-29ce-4966-9dd5-d34b3dd225b1-kube-api-access-q4bcj\") pod \"redhat-marketplace-dgjf4\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:09 crc kubenswrapper[5005]: I0225 14:21:09.678052 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:10 crc kubenswrapper[5005]: I0225 14:21:10.148490 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgjf4"] Feb 25 14:21:10 crc kubenswrapper[5005]: I0225 14:21:10.681176 5005 generic.go:334] "Generic (PLEG): container finished" podID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerID="b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527" exitCode=0 Feb 25 14:21:10 crc kubenswrapper[5005]: I0225 14:21:10.681484 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgjf4" event={"ID":"f0020691-29ce-4966-9dd5-d34b3dd225b1","Type":"ContainerDied","Data":"b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527"} Feb 25 14:21:10 crc kubenswrapper[5005]: I0225 14:21:10.681582 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgjf4" event={"ID":"f0020691-29ce-4966-9dd5-d34b3dd225b1","Type":"ContainerStarted","Data":"129c2e03e4d2c28ca78fbbb8dd106453022925eba00aae3158e58433dab3b096"} Feb 25 14:21:12 crc kubenswrapper[5005]: I0225 14:21:12.709759 5005 generic.go:334] "Generic (PLEG): container finished" podID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerID="054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13" exitCode=0 Feb 25 14:21:12 crc kubenswrapper[5005]: I0225 14:21:12.711128 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgjf4" event={"ID":"f0020691-29ce-4966-9dd5-d34b3dd225b1","Type":"ContainerDied","Data":"054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13"} Feb 25 14:21:13 crc kubenswrapper[5005]: I0225 14:21:13.731987 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgjf4" 
event={"ID":"f0020691-29ce-4966-9dd5-d34b3dd225b1","Type":"ContainerStarted","Data":"5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7"} Feb 25 14:21:19 crc kubenswrapper[5005]: I0225 14:21:19.679171 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:19 crc kubenswrapper[5005]: I0225 14:21:19.680042 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:19 crc kubenswrapper[5005]: I0225 14:21:19.734447 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:19 crc kubenswrapper[5005]: I0225 14:21:19.757136 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dgjf4" podStartSLOduration=8.015608966 podStartE2EDuration="10.7571208s" podCreationTimestamp="2026-02-25 14:21:09 +0000 UTC" firstStartedPulling="2026-02-25 14:21:10.686417063 +0000 UTC m=+10984.727149390" lastFinishedPulling="2026-02-25 14:21:13.427928897 +0000 UTC m=+10987.468661224" observedRunningTime="2026-02-25 14:21:13.763143744 +0000 UTC m=+10987.803876091" watchObservedRunningTime="2026-02-25 14:21:19.7571208 +0000 UTC m=+10993.797853117" Feb 25 14:21:19 crc kubenswrapper[5005]: I0225 14:21:19.832590 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:19 crc kubenswrapper[5005]: I0225 14:21:19.972661 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgjf4"] Feb 25 14:21:21 crc kubenswrapper[5005]: I0225 14:21:21.802561 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dgjf4" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="registry-server" 
containerID="cri-o://5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7" gracePeriod=2 Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.545036 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.619882 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-catalog-content\") pod \"f0020691-29ce-4966-9dd5-d34b3dd225b1\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.619991 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4bcj\" (UniqueName: \"kubernetes.io/projected/f0020691-29ce-4966-9dd5-d34b3dd225b1-kube-api-access-q4bcj\") pod \"f0020691-29ce-4966-9dd5-d34b3dd225b1\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.620028 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-utilities\") pod \"f0020691-29ce-4966-9dd5-d34b3dd225b1\" (UID: \"f0020691-29ce-4966-9dd5-d34b3dd225b1\") " Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.621000 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-utilities" (OuterVolumeSpecName: "utilities") pod "f0020691-29ce-4966-9dd5-d34b3dd225b1" (UID: "f0020691-29ce-4966-9dd5-d34b3dd225b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.632613 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0020691-29ce-4966-9dd5-d34b3dd225b1-kube-api-access-q4bcj" (OuterVolumeSpecName: "kube-api-access-q4bcj") pod "f0020691-29ce-4966-9dd5-d34b3dd225b1" (UID: "f0020691-29ce-4966-9dd5-d34b3dd225b1"). InnerVolumeSpecName "kube-api-access-q4bcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.645206 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0020691-29ce-4966-9dd5-d34b3dd225b1" (UID: "f0020691-29ce-4966-9dd5-d34b3dd225b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.722950 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.722996 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4bcj\" (UniqueName: \"kubernetes.io/projected/f0020691-29ce-4966-9dd5-d34b3dd225b1-kube-api-access-q4bcj\") on node \"crc\" DevicePath \"\"" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.723013 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0020691-29ce-4966-9dd5-d34b3dd225b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.813954 5005 generic.go:334] "Generic (PLEG): container finished" podID="f0020691-29ce-4966-9dd5-d34b3dd225b1" 
containerID="5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7" exitCode=0 Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.814140 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgjf4" event={"ID":"f0020691-29ce-4966-9dd5-d34b3dd225b1","Type":"ContainerDied","Data":"5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7"} Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.814729 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dgjf4" event={"ID":"f0020691-29ce-4966-9dd5-d34b3dd225b1","Type":"ContainerDied","Data":"129c2e03e4d2c28ca78fbbb8dd106453022925eba00aae3158e58433dab3b096"} Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.814268 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dgjf4" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.814772 5005 scope.go:117] "RemoveContainer" containerID="5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.935060 5005 scope.go:117] "RemoveContainer" containerID="054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.939541 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgjf4"] Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.958603 5005 scope.go:117] "RemoveContainer" containerID="b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.966112 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dgjf4"] Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.995437 5005 scope.go:117] "RemoveContainer" containerID="5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7" Feb 25 
14:21:22 crc kubenswrapper[5005]: E0225 14:21:22.996761 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7\": container with ID starting with 5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7 not found: ID does not exist" containerID="5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.996809 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7"} err="failed to get container status \"5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7\": rpc error: code = NotFound desc = could not find container \"5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7\": container with ID starting with 5e3924540d1199368f727d6f74e5ae49fe2ee9602c6f0af87bdc94d5945b05f7 not found: ID does not exist" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.996836 5005 scope.go:117] "RemoveContainer" containerID="054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13" Feb 25 14:21:22 crc kubenswrapper[5005]: E0225 14:21:22.997128 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13\": container with ID starting with 054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13 not found: ID does not exist" containerID="054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.997157 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13"} err="failed to get container status 
\"054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13\": rpc error: code = NotFound desc = could not find container \"054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13\": container with ID starting with 054da7840917680b10678fd473f7f90b95e3d4f320889e0ca90109a67378ea13 not found: ID does not exist" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.997175 5005 scope.go:117] "RemoveContainer" containerID="b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527" Feb 25 14:21:22 crc kubenswrapper[5005]: E0225 14:21:22.997436 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527\": container with ID starting with b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527 not found: ID does not exist" containerID="b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527" Feb 25 14:21:22 crc kubenswrapper[5005]: I0225 14:21:22.997463 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527"} err="failed to get container status \"b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527\": rpc error: code = NotFound desc = could not find container \"b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527\": container with ID starting with b1193c86ccde5c8dbefc04cbd916d04f47f3bcf65bbaabe1da0d81e2999c5527 not found: ID does not exist" Feb 25 14:21:24 crc kubenswrapper[5005]: I0225 14:21:24.704142 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" path="/var/lib/kubelet/pods/f0020691-29ce-4966-9dd5-d34b3dd225b1/volumes" Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.087702 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.088265 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.088314 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.089163 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad13f166a37837786041d84a352dd1da26ff2473b3b78faeba1072292dc343ec"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.089239 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://ad13f166a37837786041d84a352dd1da26ff2473b3b78faeba1072292dc343ec" gracePeriod=600 Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.863661 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerID="ad13f166a37837786041d84a352dd1da26ff2473b3b78faeba1072292dc343ec" exitCode=0 Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.864289 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"ad13f166a37837786041d84a352dd1da26ff2473b3b78faeba1072292dc343ec"} Feb 25 14:21:28 crc kubenswrapper[5005]: I0225 14:21:28.864461 5005 scope.go:117] "RemoveContainer" containerID="0fd17b4cc81721a66adaff149966a75a0355cfefe1996efe9af637406ab28af1" Feb 25 14:21:29 crc kubenswrapper[5005]: I0225 14:21:29.874797 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerStarted","Data":"08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa"} Feb 25 14:21:52 crc kubenswrapper[5005]: I0225 14:21:52.086970 5005 generic.go:334] "Generic (PLEG): container finished" podID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerID="3d2c17fb28c07f3be7a3fb592a51ac79c0647183a7c74ee3568fea267d3b05cd" exitCode=0 Feb 25 14:21:52 crc kubenswrapper[5005]: I0225 14:21:52.087062 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpkd5/must-gather-cvghf" event={"ID":"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316","Type":"ContainerDied","Data":"3d2c17fb28c07f3be7a3fb592a51ac79c0647183a7c74ee3568fea267d3b05cd"} Feb 25 14:21:52 crc kubenswrapper[5005]: I0225 14:21:52.088100 5005 scope.go:117] "RemoveContainer" containerID="3d2c17fb28c07f3be7a3fb592a51ac79c0647183a7c74ee3568fea267d3b05cd" Feb 25 14:21:52 crc kubenswrapper[5005]: I0225 14:21:52.306616 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpkd5_must-gather-cvghf_c3eb9525-7d3e-4b6e-9c64-f38ee54a8316/gather/0.log" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.148260 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533822-6j7dr"] Feb 25 14:22:00 crc kubenswrapper[5005]: E0225 14:22:00.150508 5005 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="registry-server" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.150628 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="registry-server" Feb 25 14:22:00 crc kubenswrapper[5005]: E0225 14:22:00.150716 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="extract-content" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.150790 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="extract-content" Feb 25 14:22:00 crc kubenswrapper[5005]: E0225 14:22:00.150934 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="extract-utilities" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.151027 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="extract-utilities" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.151356 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0020691-29ce-4966-9dd5-d34b3dd225b1" containerName="registry-server" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.152283 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.155514 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.155673 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.157449 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533822-6j7dr"] Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.160196 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.215282 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blzh6\" (UniqueName: \"kubernetes.io/projected/154e7723-45f2-4f76-bf20-b44380ab26ec-kube-api-access-blzh6\") pod \"auto-csr-approver-29533822-6j7dr\" (UID: \"154e7723-45f2-4f76-bf20-b44380ab26ec\") " pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.318017 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blzh6\" (UniqueName: \"kubernetes.io/projected/154e7723-45f2-4f76-bf20-b44380ab26ec-kube-api-access-blzh6\") pod \"auto-csr-approver-29533822-6j7dr\" (UID: \"154e7723-45f2-4f76-bf20-b44380ab26ec\") " pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.342554 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blzh6\" (UniqueName: \"kubernetes.io/projected/154e7723-45f2-4f76-bf20-b44380ab26ec-kube-api-access-blzh6\") pod \"auto-csr-approver-29533822-6j7dr\" (UID: \"154e7723-45f2-4f76-bf20-b44380ab26ec\") " 
pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.474010 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:00 crc kubenswrapper[5005]: I0225 14:22:00.981541 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533822-6j7dr"] Feb 25 14:22:01 crc kubenswrapper[5005]: I0225 14:22:01.161787 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" event={"ID":"154e7723-45f2-4f76-bf20-b44380ab26ec","Type":"ContainerStarted","Data":"13531f4fd171742a0b0afa10a2cd47826370ecd22af02f934809312788399d3d"} Feb 25 14:22:01 crc kubenswrapper[5005]: I0225 14:22:01.263618 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpkd5/must-gather-cvghf"] Feb 25 14:22:01 crc kubenswrapper[5005]: I0225 14:22:01.263818 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lpkd5/must-gather-cvghf" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="copy" containerID="cri-o://4842fd77e2a6d8c97d35b8045f19958de3d7730481c50628cf80c7ce4a77428e" gracePeriod=2 Feb 25 14:22:01 crc kubenswrapper[5005]: I0225 14:22:01.274908 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpkd5/must-gather-cvghf"] Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.172704 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpkd5_must-gather-cvghf_c3eb9525-7d3e-4b6e-9c64-f38ee54a8316/copy/0.log" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.173628 5005 generic.go:334] "Generic (PLEG): container finished" podID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerID="4842fd77e2a6d8c97d35b8045f19958de3d7730481c50628cf80c7ce4a77428e" exitCode=143 Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.173682 5005 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82fbe1a851a465c50d95405017c6ec78f2da6d1e8f77f87ef7ce9a4fe5996e8" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.342891 5005 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpkd5_must-gather-cvghf_c3eb9525-7d3e-4b6e-9c64-f38ee54a8316/copy/0.log" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.343470 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.363601 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hpfx\" (UniqueName: \"kubernetes.io/projected/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-kube-api-access-2hpfx\") pod \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.363716 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-must-gather-output\") pod \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\" (UID: \"c3eb9525-7d3e-4b6e-9c64-f38ee54a8316\") " Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.371819 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-kube-api-access-2hpfx" (OuterVolumeSpecName: "kube-api-access-2hpfx") pod "c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" (UID: "c3eb9525-7d3e-4b6e-9c64-f38ee54a8316"). InnerVolumeSpecName "kube-api-access-2hpfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.465359 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hpfx\" (UniqueName: \"kubernetes.io/projected/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-kube-api-access-2hpfx\") on node \"crc\" DevicePath \"\"" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.565748 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" (UID: "c3eb9525-7d3e-4b6e-9c64-f38ee54a8316"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.566898 5005 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 25 14:22:02 crc kubenswrapper[5005]: I0225 14:22:02.696905 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" path="/var/lib/kubelet/pods/c3eb9525-7d3e-4b6e-9c64-f38ee54a8316/volumes" Feb 25 14:22:03 crc kubenswrapper[5005]: I0225 14:22:03.183059 5005 generic.go:334] "Generic (PLEG): container finished" podID="154e7723-45f2-4f76-bf20-b44380ab26ec" containerID="7fde4cdf6c76f23062521afc9940207ddb07ee36b63ee27a1eeee05f525d2b8a" exitCode=0 Feb 25 14:22:03 crc kubenswrapper[5005]: I0225 14:22:03.183139 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpkd5/must-gather-cvghf" Feb 25 14:22:03 crc kubenswrapper[5005]: I0225 14:22:03.183161 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" event={"ID":"154e7723-45f2-4f76-bf20-b44380ab26ec","Type":"ContainerDied","Data":"7fde4cdf6c76f23062521afc9940207ddb07ee36b63ee27a1eeee05f525d2b8a"} Feb 25 14:22:04 crc kubenswrapper[5005]: I0225 14:22:04.626585 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:04 crc kubenswrapper[5005]: I0225 14:22:04.803002 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blzh6\" (UniqueName: \"kubernetes.io/projected/154e7723-45f2-4f76-bf20-b44380ab26ec-kube-api-access-blzh6\") pod \"154e7723-45f2-4f76-bf20-b44380ab26ec\" (UID: \"154e7723-45f2-4f76-bf20-b44380ab26ec\") " Feb 25 14:22:04 crc kubenswrapper[5005]: I0225 14:22:04.813665 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154e7723-45f2-4f76-bf20-b44380ab26ec-kube-api-access-blzh6" (OuterVolumeSpecName: "kube-api-access-blzh6") pod "154e7723-45f2-4f76-bf20-b44380ab26ec" (UID: "154e7723-45f2-4f76-bf20-b44380ab26ec"). InnerVolumeSpecName "kube-api-access-blzh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:22:04 crc kubenswrapper[5005]: I0225 14:22:04.905992 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blzh6\" (UniqueName: \"kubernetes.io/projected/154e7723-45f2-4f76-bf20-b44380ab26ec-kube-api-access-blzh6\") on node \"crc\" DevicePath \"\"" Feb 25 14:22:05 crc kubenswrapper[5005]: I0225 14:22:05.202726 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" event={"ID":"154e7723-45f2-4f76-bf20-b44380ab26ec","Type":"ContainerDied","Data":"13531f4fd171742a0b0afa10a2cd47826370ecd22af02f934809312788399d3d"} Feb 25 14:22:05 crc kubenswrapper[5005]: I0225 14:22:05.202767 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13531f4fd171742a0b0afa10a2cd47826370ecd22af02f934809312788399d3d" Feb 25 14:22:05 crc kubenswrapper[5005]: I0225 14:22:05.203035 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533822-6j7dr" Feb 25 14:22:05 crc kubenswrapper[5005]: I0225 14:22:05.688749 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533816-9rgsp"] Feb 25 14:22:05 crc kubenswrapper[5005]: I0225 14:22:05.700590 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533816-9rgsp"] Feb 25 14:22:06 crc kubenswrapper[5005]: I0225 14:22:06.696262 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfdb45e-91ba-4ac4-914f-ac8175b5cd70" path="/var/lib/kubelet/pods/1bfdb45e-91ba-4ac4-914f-ac8175b5cd70/volumes" Feb 25 14:22:28 crc kubenswrapper[5005]: I0225 14:22:28.973198 5005 scope.go:117] "RemoveContainer" containerID="d9e6cc6614631429870024c3a2a08f3afa68b4d23b150dd1a39cbd0081f4b8ba" Feb 25 14:22:29 crc kubenswrapper[5005]: I0225 14:22:29.017006 5005 scope.go:117] "RemoveContainer" 
containerID="2df263c24cb3d562ab52ad387780e766a24fde5eb9ea726c135a2f48a7277045" Feb 25 14:22:29 crc kubenswrapper[5005]: I0225 14:22:29.050228 5005 scope.go:117] "RemoveContainer" containerID="3d2c17fb28c07f3be7a3fb592a51ac79c0647183a7c74ee3568fea267d3b05cd" Feb 25 14:22:29 crc kubenswrapper[5005]: I0225 14:22:29.148122 5005 scope.go:117] "RemoveContainer" containerID="4842fd77e2a6d8c97d35b8045f19958de3d7730481c50628cf80c7ce4a77428e" Feb 25 14:23:58 crc kubenswrapper[5005]: I0225 14:23:58.095753 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:23:58 crc kubenswrapper[5005]: I0225 14:23:58.096474 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.139057 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533824-wnrzs"] Feb 25 14:24:00 crc kubenswrapper[5005]: E0225 14:24:00.139833 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154e7723-45f2-4f76-bf20-b44380ab26ec" containerName="oc" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.139849 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="154e7723-45f2-4f76-bf20-b44380ab26ec" containerName="oc" Feb 25 14:24:00 crc kubenswrapper[5005]: E0225 14:24:00.139872 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="gather" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.139881 5005 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="gather" Feb 25 14:24:00 crc kubenswrapper[5005]: E0225 14:24:00.139937 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="copy" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.139945 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="copy" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.140154 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="gather" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.140173 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="154e7723-45f2-4f76-bf20-b44380ab26ec" containerName="oc" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.140184 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3eb9525-7d3e-4b6e-9c64-f38ee54a8316" containerName="copy" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.140940 5005 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.143474 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.143538 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.144195 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.147937 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533824-wnrzs"] Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.156704 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkxgn\" (UniqueName: \"kubernetes.io/projected/1e1a3053-99f4-4934-b4db-fdf0a6de5f3e-kube-api-access-zkxgn\") pod \"auto-csr-approver-29533824-wnrzs\" (UID: \"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e\") " pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.258122 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkxgn\" (UniqueName: \"kubernetes.io/projected/1e1a3053-99f4-4934-b4db-fdf0a6de5f3e-kube-api-access-zkxgn\") pod \"auto-csr-approver-29533824-wnrzs\" (UID: \"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e\") " pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.281692 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkxgn\" (UniqueName: \"kubernetes.io/projected/1e1a3053-99f4-4934-b4db-fdf0a6de5f3e-kube-api-access-zkxgn\") pod \"auto-csr-approver-29533824-wnrzs\" (UID: \"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e\") " 
pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.509487 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.974025 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533824-wnrzs"] Feb 25 14:24:00 crc kubenswrapper[5005]: I0225 14:24:00.983092 5005 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 25 14:24:01 crc kubenswrapper[5005]: I0225 14:24:01.204354 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" event={"ID":"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e","Type":"ContainerStarted","Data":"3804cd85c771c71f1fd200d1515197cb8c25eb4abfa0fecaa60db28aa9fe753d"} Feb 25 14:24:03 crc kubenswrapper[5005]: I0225 14:24:03.233541 5005 generic.go:334] "Generic (PLEG): container finished" podID="1e1a3053-99f4-4934-b4db-fdf0a6de5f3e" containerID="fd2f71544bd95628b860e74a05ecaba284e7f5757079ae883fc1aa47ca82dffc" exitCode=0 Feb 25 14:24:03 crc kubenswrapper[5005]: I0225 14:24:03.233638 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" event={"ID":"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e","Type":"ContainerDied","Data":"fd2f71544bd95628b860e74a05ecaba284e7f5757079ae883fc1aa47ca82dffc"} Feb 25 14:24:04 crc kubenswrapper[5005]: I0225 14:24:04.585724 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:04 crc kubenswrapper[5005]: I0225 14:24:04.684482 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkxgn\" (UniqueName: \"kubernetes.io/projected/1e1a3053-99f4-4934-b4db-fdf0a6de5f3e-kube-api-access-zkxgn\") pod \"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e\" (UID: \"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e\") " Feb 25 14:24:04 crc kubenswrapper[5005]: I0225 14:24:04.690443 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1a3053-99f4-4934-b4db-fdf0a6de5f3e-kube-api-access-zkxgn" (OuterVolumeSpecName: "kube-api-access-zkxgn") pod "1e1a3053-99f4-4934-b4db-fdf0a6de5f3e" (UID: "1e1a3053-99f4-4934-b4db-fdf0a6de5f3e"). InnerVolumeSpecName "kube-api-access-zkxgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:24:04 crc kubenswrapper[5005]: I0225 14:24:04.786745 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkxgn\" (UniqueName: \"kubernetes.io/projected/1e1a3053-99f4-4934-b4db-fdf0a6de5f3e-kube-api-access-zkxgn\") on node \"crc\" DevicePath \"\"" Feb 25 14:24:05 crc kubenswrapper[5005]: I0225 14:24:05.250487 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" event={"ID":"1e1a3053-99f4-4934-b4db-fdf0a6de5f3e","Type":"ContainerDied","Data":"3804cd85c771c71f1fd200d1515197cb8c25eb4abfa0fecaa60db28aa9fe753d"} Feb 25 14:24:05 crc kubenswrapper[5005]: I0225 14:24:05.250524 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3804cd85c771c71f1fd200d1515197cb8c25eb4abfa0fecaa60db28aa9fe753d" Feb 25 14:24:05 crc kubenswrapper[5005]: I0225 14:24:05.250545 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533824-wnrzs" Feb 25 14:24:05 crc kubenswrapper[5005]: I0225 14:24:05.649164 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533818-df79r"] Feb 25 14:24:05 crc kubenswrapper[5005]: I0225 14:24:05.656702 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533818-df79r"] Feb 25 14:24:06 crc kubenswrapper[5005]: I0225 14:24:06.696708 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21240307-3105-420f-b08e-d921f14ac6bf" path="/var/lib/kubelet/pods/21240307-3105-420f-b08e-d921f14ac6bf/volumes" Feb 25 14:24:28 crc kubenswrapper[5005]: I0225 14:24:28.087770 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:24:28 crc kubenswrapper[5005]: I0225 14:24:28.088513 5005 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:24:29 crc kubenswrapper[5005]: I0225 14:24:29.248636 5005 scope.go:117] "RemoveContainer" containerID="b3172521c595e1f3285289c1286ce61241ab5cafea36b29b47ede131e86164d4" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.176034 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jctmg"] Feb 25 14:24:38 crc kubenswrapper[5005]: E0225 14:24:38.177051 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1a3053-99f4-4934-b4db-fdf0a6de5f3e" containerName="oc" Feb 25 14:24:38 crc 
kubenswrapper[5005]: I0225 14:24:38.177068 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1a3053-99f4-4934-b4db-fdf0a6de5f3e" containerName="oc" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.177283 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1a3053-99f4-4934-b4db-fdf0a6de5f3e" containerName="oc" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.179416 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.194671 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jctmg"] Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.194798 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-utilities\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.194876 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-catalog-content\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.195073 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqdc\" (UniqueName: \"kubernetes.io/projected/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-kube-api-access-bsqdc\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc 
kubenswrapper[5005]: I0225 14:24:38.297242 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqdc\" (UniqueName: \"kubernetes.io/projected/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-kube-api-access-bsqdc\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.297576 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-utilities\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.297715 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-catalog-content\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.298449 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-utilities\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.298488 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-catalog-content\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.317131 
5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsqdc\" (UniqueName: \"kubernetes.io/projected/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-kube-api-access-bsqdc\") pod \"certified-operators-jctmg\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.508546 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:38 crc kubenswrapper[5005]: I0225 14:24:38.989776 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jctmg"] Feb 25 14:24:39 crc kubenswrapper[5005]: I0225 14:24:39.574048 5005 generic.go:334] "Generic (PLEG): container finished" podID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerID="b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82" exitCode=0 Feb 25 14:24:39 crc kubenswrapper[5005]: I0225 14:24:39.574571 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerDied","Data":"b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82"} Feb 25 14:24:39 crc kubenswrapper[5005]: I0225 14:24:39.574680 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerStarted","Data":"c47e3213027383cfaea10b91deba3b3cb838ed6e48cd81a2ac161e3990a95e22"} Feb 25 14:24:40 crc kubenswrapper[5005]: I0225 14:24:40.583896 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerStarted","Data":"c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8"} Feb 25 14:24:41 crc kubenswrapper[5005]: I0225 14:24:41.593631 5005 
generic.go:334] "Generic (PLEG): container finished" podID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerID="c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8" exitCode=0 Feb 25 14:24:41 crc kubenswrapper[5005]: I0225 14:24:41.593672 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerDied","Data":"c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8"} Feb 25 14:24:42 crc kubenswrapper[5005]: I0225 14:24:42.602993 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerStarted","Data":"015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941"} Feb 25 14:24:42 crc kubenswrapper[5005]: I0225 14:24:42.625967 5005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jctmg" podStartSLOduration=2.18959362 podStartE2EDuration="4.625939545s" podCreationTimestamp="2026-02-25 14:24:38 +0000 UTC" firstStartedPulling="2026-02-25 14:24:39.57721913 +0000 UTC m=+11193.617951457" lastFinishedPulling="2026-02-25 14:24:42.013565055 +0000 UTC m=+11196.054297382" observedRunningTime="2026-02-25 14:24:42.618253694 +0000 UTC m=+11196.658986021" watchObservedRunningTime="2026-02-25 14:24:42.625939545 +0000 UTC m=+11196.666671892" Feb 25 14:24:48 crc kubenswrapper[5005]: I0225 14:24:48.509283 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:48 crc kubenswrapper[5005]: I0225 14:24:48.509858 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:48 crc kubenswrapper[5005]: I0225 14:24:48.553735 5005 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:48 crc kubenswrapper[5005]: I0225 14:24:48.694938 5005 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:48 crc kubenswrapper[5005]: I0225 14:24:48.786921 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jctmg"] Feb 25 14:24:50 crc kubenswrapper[5005]: I0225 14:24:50.665775 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jctmg" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="registry-server" containerID="cri-o://015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941" gracePeriod=2 Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.547976 5005 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.670288 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-utilities\") pod \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.670557 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-catalog-content\") pod \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.670674 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsqdc\" (UniqueName: \"kubernetes.io/projected/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-kube-api-access-bsqdc\") pod 
\"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\" (UID: \"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e\") " Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.671972 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-utilities" (OuterVolumeSpecName: "utilities") pod "62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" (UID: "62d72fb9-9a84-4ab4-9232-1d1bec40cf4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.675968 5005 generic.go:334] "Generic (PLEG): container finished" podID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerID="015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941" exitCode=0 Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.676006 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerDied","Data":"015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941"} Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.676033 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jctmg" event={"ID":"62d72fb9-9a84-4ab4-9232-1d1bec40cf4e","Type":"ContainerDied","Data":"c47e3213027383cfaea10b91deba3b3cb838ed6e48cd81a2ac161e3990a95e22"} Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.676041 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jctmg" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.676050 5005 scope.go:117] "RemoveContainer" containerID="015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.728435 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" (UID: "62d72fb9-9a84-4ab4-9232-1d1bec40cf4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.730925 5005 scope.go:117] "RemoveContainer" containerID="c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.761079 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-kube-api-access-bsqdc" (OuterVolumeSpecName: "kube-api-access-bsqdc") pod "62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" (UID: "62d72fb9-9a84-4ab4-9232-1d1bec40cf4e"). InnerVolumeSpecName "kube-api-access-bsqdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.769566 5005 scope.go:117] "RemoveContainer" containerID="b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.772958 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsqdc\" (UniqueName: \"kubernetes.io/projected/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-kube-api-access-bsqdc\") on node \"crc\" DevicePath \"\"" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.772982 5005 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-utilities\") on node \"crc\" DevicePath \"\"" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.772992 5005 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.799681 5005 scope.go:117] "RemoveContainer" containerID="015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941" Feb 25 14:24:51 crc kubenswrapper[5005]: E0225 14:24:51.800168 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941\": container with ID starting with 015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941 not found: ID does not exist" containerID="015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.800218 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941"} err="failed to get container status 
\"015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941\": rpc error: code = NotFound desc = could not find container \"015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941\": container with ID starting with 015821e089c36bdc56e5773f592ca861c8006051dbd2ff66be5320b57108c941 not found: ID does not exist" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.800248 5005 scope.go:117] "RemoveContainer" containerID="c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8" Feb 25 14:24:51 crc kubenswrapper[5005]: E0225 14:24:51.800697 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8\": container with ID starting with c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8 not found: ID does not exist" containerID="c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.800729 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8"} err="failed to get container status \"c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8\": rpc error: code = NotFound desc = could not find container \"c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8\": container with ID starting with c2504235bd0a2803541a3e1fe0cf8e0fe426c29af6a50db78ed41c439a3944c8 not found: ID does not exist" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.800752 5005 scope.go:117] "RemoveContainer" containerID="b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82" Feb 25 14:24:51 crc kubenswrapper[5005]: E0225 14:24:51.801001 5005 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82\": container with ID starting with b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82 not found: ID does not exist" containerID="b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82" Feb 25 14:24:51 crc kubenswrapper[5005]: I0225 14:24:51.801019 5005 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82"} err="failed to get container status \"b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82\": rpc error: code = NotFound desc = could not find container \"b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82\": container with ID starting with b6bbef519a4a31ee0afa6e42e90535ff6cb45519de6bcb0fcc2b65d1dcd7db82 not found: ID does not exist" Feb 25 14:24:52 crc kubenswrapper[5005]: I0225 14:24:52.009492 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jctmg"] Feb 25 14:24:52 crc kubenswrapper[5005]: I0225 14:24:52.016674 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jctmg"] Feb 25 14:24:52 crc kubenswrapper[5005]: I0225 14:24:52.697853 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" path="/var/lib/kubelet/pods/62d72fb9-9a84-4ab4-9232-1d1bec40cf4e/volumes" Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.087122 5005 patch_prober.go:28] interesting pod/machine-config-daemon-tct5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.087814 5005 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.087856 5005 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.088562 5005 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa"} pod="openshift-machine-config-operator/machine-config-daemon-tct5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.088611 5005 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" containerName="machine-config-daemon" containerID="cri-o://08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" gracePeriod=600 Feb 25 14:24:58 crc kubenswrapper[5005]: E0225 14:24:58.226247 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.745506 5005 generic.go:334] "Generic (PLEG): container finished" podID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" 
containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" exitCode=0 Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.745743 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" event={"ID":"d56aef23-d794-49a4-8e6b-2c9e2d1adebf","Type":"ContainerDied","Data":"08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa"} Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.745988 5005 scope.go:117] "RemoveContainer" containerID="ad13f166a37837786041d84a352dd1da26ff2473b3b78faeba1072292dc343ec" Feb 25 14:24:58 crc kubenswrapper[5005]: I0225 14:24:58.747039 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:24:58 crc kubenswrapper[5005]: E0225 14:24:58.747450 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:25:09 crc kubenswrapper[5005]: I0225 14:25:09.686180 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:25:09 crc kubenswrapper[5005]: E0225 14:25:09.687008 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:25:20 crc kubenswrapper[5005]: I0225 
14:25:20.686026 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:25:20 crc kubenswrapper[5005]: E0225 14:25:20.686889 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:25:32 crc kubenswrapper[5005]: I0225 14:25:32.686010 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:25:32 crc kubenswrapper[5005]: E0225 14:25:32.686658 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:25:44 crc kubenswrapper[5005]: I0225 14:25:44.686151 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:25:44 crc kubenswrapper[5005]: E0225 14:25:44.687030 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:25:55 crc 
kubenswrapper[5005]: I0225 14:25:55.685737 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:25:55 crc kubenswrapper[5005]: E0225 14:25:55.686542 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.146534 5005 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29533826-v8rxd"] Feb 25 14:26:00 crc kubenswrapper[5005]: E0225 14:26:00.147530 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="extract-utilities" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.147547 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="extract-utilities" Feb 25 14:26:00 crc kubenswrapper[5005]: E0225 14:26:00.147564 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="extract-content" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.147571 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="extract-content" Feb 25 14:26:00 crc kubenswrapper[5005]: E0225 14:26:00.147587 5005 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="registry-server" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.147594 5005 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" 
containerName="registry-server" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.147811 5005 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d72fb9-9a84-4ab4-9232-1d1bec40cf4e" containerName="registry-server" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.148614 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.151389 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.151560 5005 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.151699 5005 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7d69q" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.168024 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533826-v8rxd"] Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.264713 5005 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzhk\" (UniqueName: \"kubernetes.io/projected/9fe4411a-cece-4e5f-aa77-8ffa0be1a766-kube-api-access-thzhk\") pod \"auto-csr-approver-29533826-v8rxd\" (UID: \"9fe4411a-cece-4e5f-aa77-8ffa0be1a766\") " pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.367057 5005 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzhk\" (UniqueName: \"kubernetes.io/projected/9fe4411a-cece-4e5f-aa77-8ffa0be1a766-kube-api-access-thzhk\") pod \"auto-csr-approver-29533826-v8rxd\" (UID: \"9fe4411a-cece-4e5f-aa77-8ffa0be1a766\") " pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 
14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.401442 5005 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzhk\" (UniqueName: \"kubernetes.io/projected/9fe4411a-cece-4e5f-aa77-8ffa0be1a766-kube-api-access-thzhk\") pod \"auto-csr-approver-29533826-v8rxd\" (UID: \"9fe4411a-cece-4e5f-aa77-8ffa0be1a766\") " pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.476301 5005 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 14:26:00 crc kubenswrapper[5005]: I0225 14:26:00.905079 5005 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29533826-v8rxd"] Feb 25 14:26:01 crc kubenswrapper[5005]: I0225 14:26:01.337664 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" event={"ID":"9fe4411a-cece-4e5f-aa77-8ffa0be1a766","Type":"ContainerStarted","Data":"f5b9e9b950b217788975e81b25a8802e0c3dfc23d63f5314cc2bfc9a6b44fab1"} Feb 25 14:26:03 crc kubenswrapper[5005]: I0225 14:26:03.364320 5005 generic.go:334] "Generic (PLEG): container finished" podID="9fe4411a-cece-4e5f-aa77-8ffa0be1a766" containerID="1fde0b3da522ed700a870e31e2d6d55b49efea2509d73f3936dfbee2ea966c67" exitCode=0 Feb 25 14:26:03 crc kubenswrapper[5005]: I0225 14:26:03.364487 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" event={"ID":"9fe4411a-cece-4e5f-aa77-8ffa0be1a766","Type":"ContainerDied","Data":"1fde0b3da522ed700a870e31e2d6d55b49efea2509d73f3936dfbee2ea966c67"} Feb 25 14:26:04 crc kubenswrapper[5005]: I0225 14:26:04.706659 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 14:26:04 crc kubenswrapper[5005]: I0225 14:26:04.751689 5005 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thzhk\" (UniqueName: \"kubernetes.io/projected/9fe4411a-cece-4e5f-aa77-8ffa0be1a766-kube-api-access-thzhk\") pod \"9fe4411a-cece-4e5f-aa77-8ffa0be1a766\" (UID: \"9fe4411a-cece-4e5f-aa77-8ffa0be1a766\") " Feb 25 14:26:04 crc kubenswrapper[5005]: I0225 14:26:04.759569 5005 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe4411a-cece-4e5f-aa77-8ffa0be1a766-kube-api-access-thzhk" (OuterVolumeSpecName: "kube-api-access-thzhk") pod "9fe4411a-cece-4e5f-aa77-8ffa0be1a766" (UID: "9fe4411a-cece-4e5f-aa77-8ffa0be1a766"). InnerVolumeSpecName "kube-api-access-thzhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 25 14:26:04 crc kubenswrapper[5005]: I0225 14:26:04.856068 5005 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thzhk\" (UniqueName: \"kubernetes.io/projected/9fe4411a-cece-4e5f-aa77-8ffa0be1a766-kube-api-access-thzhk\") on node \"crc\" DevicePath \"\"" Feb 25 14:26:05 crc kubenswrapper[5005]: I0225 14:26:05.380133 5005 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" event={"ID":"9fe4411a-cece-4e5f-aa77-8ffa0be1a766","Type":"ContainerDied","Data":"f5b9e9b950b217788975e81b25a8802e0c3dfc23d63f5314cc2bfc9a6b44fab1"} Feb 25 14:26:05 crc kubenswrapper[5005]: I0225 14:26:05.380734 5005 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b9e9b950b217788975e81b25a8802e0c3dfc23d63f5314cc2bfc9a6b44fab1" Feb 25 14:26:05 crc kubenswrapper[5005]: I0225 14:26:05.380200 5005 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29533826-v8rxd" Feb 25 14:26:05 crc kubenswrapper[5005]: I0225 14:26:05.776178 5005 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29533820-cth59"] Feb 25 14:26:05 crc kubenswrapper[5005]: I0225 14:26:05.783338 5005 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29533820-cth59"] Feb 25 14:26:06 crc kubenswrapper[5005]: I0225 14:26:06.696601 5005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d8bc83-8aa9-4c6f-8018-e0f2c534f132" path="/var/lib/kubelet/pods/e2d8bc83-8aa9-4c6f-8018-e0f2c534f132/volumes" Feb 25 14:26:08 crc kubenswrapper[5005]: I0225 14:26:08.685565 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:26:08 crc kubenswrapper[5005]: E0225 14:26:08.686132 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:26:20 crc kubenswrapper[5005]: I0225 14:26:20.685970 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:26:20 crc kubenswrapper[5005]: E0225 14:26:20.687226 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" 
podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:26:29 crc kubenswrapper[5005]: I0225 14:26:29.351600 5005 scope.go:117] "RemoveContainer" containerID="aa03753021aa581cdfd1cb8672485f2c0c59d45655393a66059202c1da6f2a95" Feb 25 14:26:31 crc kubenswrapper[5005]: I0225 14:26:31.686101 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:26:31 crc kubenswrapper[5005]: E0225 14:26:31.686901 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:26:46 crc kubenswrapper[5005]: I0225 14:26:46.692598 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:26:46 crc kubenswrapper[5005]: E0225 14:26:46.693259 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:26:59 crc kubenswrapper[5005]: I0225 14:26:59.685285 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:26:59 crc kubenswrapper[5005]: E0225 14:26:59.686115 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf" Feb 25 14:27:14 crc kubenswrapper[5005]: I0225 14:27:14.686169 5005 scope.go:117] "RemoveContainer" containerID="08ad309d5607b261c705a2f0ef607d46bdb752428517e1c6cae9a0def3d754aa" Feb 25 14:27:14 crc kubenswrapper[5005]: E0225 14:27:14.687095 5005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tct5q_openshift-machine-config-operator(d56aef23-d794-49a4-8e6b-2c9e2d1adebf)\"" pod="openshift-machine-config-operator/machine-config-daemon-tct5q" podUID="d56aef23-d794-49a4-8e6b-2c9e2d1adebf"